The Intel ME and AMD PSP are still executing proprietary code on an independent processor in your CPU package all the time, with full system access.
Yeah, you only need to ftrace your entire networking stack and watch whether it ever sends/receives packets without your knowledge. Or use libpcap to accomplish the same task. Or use a user-space packet stack and disable your default network interface.
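Just to sketch the "watch what leaves the box" idea: once you have a list of observed outbound connections (parsed from a pcap, netstat, whatever), the check itself is trivial. This is not a libpcap capture; the allowlist networks and connection tuples below are made up for illustration.

```python
# Flag outbound connections whose destination isn't on a known-good
# allowlist. Connections are (dst_ip, dst_port) tuples you'd collect
# from a sniffer or connection table; the networks here are examples.
from ipaddress import ip_address, ip_network

ALLOWED_NETS = [
    ip_network("192.168.0.0/16"),   # local LAN
    ip_network("140.82.112.0/20"),  # a service you expect traffic to
]

def unexpected_destinations(connections):
    """Return (dst_ip, dst_port) pairs not covered by the allowlist."""
    flagged = []
    for dst_ip, dst_port in connections:
        addr = ip_address(dst_ip)
        if not any(addr in net for net in ALLOWED_NETS):
            flagged.append((dst_ip, dst_port))
    return flagged

observed = [("192.168.1.10", 22), ("203.0.113.7", 443)]
print(unexpected_destinations(observed))  # only the 203.0.113.7 connection
```

The real work, of course, is capturing the traffic on hardware you trust; the filtering is the easy part.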
I get that not everything on the system is pure FOSS. But not every binary blob is NSA spyware. If you assume it is, you literally cannot use ANY computer.
FOSS OSes make the full trust-but-verify paradigm nothing more than a question of work-hours.
POWER8/9 isn't fully open. Read the license agreements. While you get access to a lot of stuff, there is a lot of fine print you are ignoring. A lot of the deep docs are behind paywalls.
To get access to POWER8/9 literature you sign away your rights to OPEN-POWER. Also, if you make anything for POWER8/9 under a public license (which licenses you can and can't use are dictated by the agreement) using docs obtained from an OPEN-POWER member company, that member company may claim ownership of your code upon leaving OPEN-POWER.
They'll really only let you use 3-Clause BSD or Apache 2. Linux has the only exception, for GPLv2; GPLv3 is banned, and using it on a project can get your OPEN-POWER membership revoked and your code ownership transferred to IBM, if they decide to pursue it.
OPEN-POWER isn't open. The docs are free, but if you write anything too useful a high-paying member can seize your software. The only protection from this is to buy in as a high-level enterprise member. OPEN-POWER is downright predatory toward free-tier research memberships.
You can pay a company to fab a Leon3 or Leon4 for you. Leon3 is GPL, with eASIC already supporting it in their Nextreme line. There's also the Rocket RISC-V core, which was fabbed on a 45nm SOI process. Do it on the same node, with anything extra as an external, swappable component on the PCB for supplier diversity. Additionally, Cambridge has FreeBSD running on a capability-secure version of 64-bit MIPS on FPGAs; it's called the CHERI CPU and CHERIBSD. One might put that processor on an ASIC.
There have been many options, but basically little individual, non-profit, or corporate work to make them happen. (shrugs)
Deep-packet inspection doesn't help if they leak along RF or covert channels in legit traffic. Catching RF leaks outside the most common spectrum is also something requiring expensive equipment and talent. The RF methods are in the TAO catalog as pre-built tools.
Yes - if we're going to include ultra-sonics, air-gap spanning networks, things of that nature... yeah, it gets very quickly into the range of nearly impossible to catch.
Especially if it's intermittent or simply passive. Then you could have an embedded issue for years and never know (I've long suspected that this could eventually be a problem for Defense companies)
I'm for different threat profiles with different schemes targeting them. We already have regular security researchers and black hats manipulating flash, RAM cells, sound/speakers, and I/O firmware. It has to be in the threat model, at least on the software side. Unfortunately, especially given the speeds of these things, mitigation probably demands new hardware, either in general (e.g. custom RAM) or for detection (e.g. a verifier of RAM's expected behavior). My old scheme of diverse, triple-redundant hardware with voting algorithms just can't match the performance needed by modern workstations and servers in software alone. Maybe not FPGAs either.
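For anyone unfamiliar with the triple-redundancy idea: three independent units compute the same result, and a voter accepts the 2-of-3 majority while flagging disagreement. A minimal sketch (names illustrative, real voters are done in hardware):

```python
# 2-of-3 majority voting across triple-redundant units.
from collections import Counter

def tmr_vote(a, b, c):
    """Return (value, fault_detected) by 2-of-3 majority vote."""
    counts = Counter([a, b, c])
    value, n = counts.most_common(1)[0]
    if n == 1:
        # All three units disagree: no majority, the system must fail safe.
        raise RuntimeError("no majority: all three units disagree")
    return value, (n != 3)  # fault_detected when one unit dissented

print(tmr_vote(42, 42, 42))  # (42, False): all agree
print(tmr_vote(42, 42, 99))  # (42, True): one unit faulted, masked
```

The cost is obvious: 3x the hardware plus the voter, and the voter itself sits on the critical path, which is exactly why this can't keep up with modern workstation/server performance in software alone.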
The thing that really scares me, as a fortunately ex-security guy, is the fact that everywhere I've worked for the last 10 years people are super casual about keyboards and mobiles.
Mobile phones are an amazing platform to do... well, almost anything. There are some areas where their possession is restricted, though I suspect a motivated party could sneak a stripped down mobile device into nearly anywhere.
Keyboards, on the other hand. Wow. I've seen even airgapped systems have random keyboards right off the pallet slapped onto them. These sit in racks for months or years, then get tossed usually to a recycler, a donation program, stolen, or just thrown into a dumpster.
Considering how much tech is in a keyboard, and how much volume it has, you could place nearly anything in there and possibly go ages without catching on.
A scenario that I recently pointed out as a 'thought exercise' was a refitted USB keyboard with a microphone, pinhole camera, and simple keylogger+screenshot engine that contained an intermittent RF/wifi/bluetooth/ultrasonic network. Programmed to dump its payload whenever an individual passed nearby and triggered it remotely.
Such a trojan could sit in a datacenter or conference room for years, completely unnoticed, transmitting the data it captured only to the cleaning crew or whoever.
Worse yet, such a device could also pass instructions to the system it was attached to as an actual USB device.
You could fit a lot of horsepower in an innocuous Dell or MS or whatever mass-produced keyboard. Toss it into a top level conference room for corporate espionage, toss it into a data center for more direct trouble, whatever.
Scary thought, and I think part of why I still use the same keyboard I've had since 1997 ;)
"Considering how much tech is in a keyboard, and how much volume it has, you could place nearly anything in there and possibly go ages without catching on."
You're thinking along the right lines. I've thought of weaponizing them, too. The main reason most don't is that someone might look inside one. Even if it's not the target, finding something obvious could make the news, with the result that the attack no longer works. That's why the NSA weaponizes the USB connectors themselves. I do think there's room for doing what the NSA is doing in a mobile-style SoC that replaces the main MCU of the keyboard, with the same labeling. People would be none the wiser unless carefully measuring electrical properties.
" and I think part of why I still use the same keyboard I've had since 1997 ;)"
Haha. I keep updating, but I stopped trusting the computers a while back. As far as subversion goes, most PC-level subversions seem to have started close to 2000, with NSA's programs kicking in around 2004. So I recommend people use pre-2004 or pre-2000 tech. Plenty of usable stuff in that category.
It's a hard problem. That's why DARPA is throwing tons of money and brains at it right now. Also why a number of defense contractors maintain their own fabs and packaging plants despite the technology aging.
And your point is? It's not a hard problem. You either trust your fabricator and tools, or you don't.
End of freaking story. If you don't, and/or you can't throw a fab plant at it, your options are limited. You can read all the docs on the open CPU stuff (as far as that goes), you can literally do everything from scratch, but unless you're a Nation State or a massive company, you're pretty much wasting your time.
/edit: I don't mean this as a criticism of your writeup (where you basically state the same thing), or even some of your other comments (where you state much the same thing).
It's literally an issue where there are VERY few people in the entire world capable of doing cutting edge processor design, coding, and implementation. They cost phenomenal amounts of money, and even with the money, people, and the best of intentions - a motivated nation state actor can muck things up.
The only real defense we have, as regular people, is to basically see if anything we own is misbehaving. Even that is a specialized skill set and time investment beyond what most folks are interested in committing.
"you either trust your fabricator and tools, or you don't."
Equals you trust blindly or don't trust at all. There's a whole range of verifiability between those two. It's worth exploring.
"but unless you're a Nation State or a massive company, you're pretty much wasting your time."
There are smaller firms on the buying and supply sides of the equation benefiting from simpler, easier-to-inspect stuff, especially for energy or cost savings. Examples include Moore's Forth processors, Java CPUs, Plasma MIPS (FOSS), 16-bitters in smartcards, etc. Those on 0.35 micron or up can have random samples inspected by eye with microscopes if the user wants to go the extra mile. Alternatively, they at least have black boxes they can analyze or test for conformance to the white-box designs they're supposed to be. Or more easily monitor at the analog or digital levels for inconsistencies, with power shut off during such an event.
Even the aerospace/defense companies I work with use almost 100% off-the-shelf hardware pre-installed with OS and software by the vendors, or at least with some minimal IT work (often offshore).
I'm happy to hear that there are some smaller fabs and things that are easier to inspect, but I think that the commodity level of most hardware still makes it super unlikely.
If a company is willing to use Office 365 (which I can neither confirm nor deny some very large shops might use, but wouldn't be out of the usual), you cannot seriously expect them to pay proper attention to what their processors are doing.
I would hope that if someone worked in high-clearance, there would be MANY such measures in place. The ease with which a fully loaded laptop can walk out of Los Alamos wouldn't really lend credence to them patching the biggest hole... the people.
You are my favorite form of security guy - the insanely suspicious sort who is always looking for the weakest point. But when it comes right down to it, there's billions of weak points at much higher levels and much more easily compromised than a chipset or compiler. It's a good academic exercise though.
"but I think that the commodity level of most hardware still makes it super unlikely."
"If a company is willing to use Office 365"
Oh I'm with you on this. Steve Walker's Computer Security Initiative and the Orange Book gave us lots of highly-secure stuff for defense, etc. They got rid of that for cheap, fast, fully-featured COTS. The same happens in business, aerospace, etc. The exceptions are usually pre-made appliances or, in aerospace, better components in the DO-178B Level A stuff. Much of it is shoddy. A good chunk of the defense fabs' business is probably replacing legacy parts in old equipment at prices guaranteed through corrupt contracts. They don't give a shit about security in general: just money. ;)
"But when it comes right down to it, there's billions of weak points at much higher levels and much more easily compromised than a chipset or compiler."
There are lots of weak points. Stopping code injection from all known vectors with simple, proven techniques at the CPU and language levels eliminates the whole malware problem if apps are whitelisted, built from source, and include no executable scripting/JIT. There are ways to conveniently enforce POLA within a system (e.g. CapDesk), do secure (even automated) configurations of networks (Boeing's Survivability Grammars), and so on. There are components for most use-cases just waiting to be productized, integrated, and sold to a larger audience. Given this covers 90+% of attacks, it's certainly worth pushing to establish a stronger baseline for companies that want less loss of secrets, availability, data, etc.
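The whitelisting piece is conceptually simple: a binary runs only if its cryptographic hash matches a known-good build. A toy sketch of the idea (the "binary" here is just an in-memory byte string, not a real ELF):

```python
# Application whitelisting by hash: only binaries whose SHA-256 appears
# in the allowlist may execute. Contents are fabricated for illustration.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pretend this is a known-good binary built from audited source.
trusted_binary = b"\x7fELF...known-good build from source..."
ALLOWLIST = {sha256_of(trusted_binary)}

def may_execute(binary: bytes) -> bool:
    return sha256_of(binary) in ALLOWLIST

print(may_execute(trusted_binary))            # True: matches allowlist
print(may_execute(trusted_binary + b"\x90"))  # False: one byte of tampering
```

Real systems (e.g. signed-code enforcement in an OS kernel) add key management and update paths, but the enforcement check is essentially this.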
There's only so much that traditional methods like coaching and monitoring can do if one can simply open a folder (not a file!) and immediately hand full control of the machine to malware because a thumbnail was rendered. Endless crap like that exploits an underlying foundation of quicksand. The HW and SW of endpoints, at least at the lowest layers, need to get in check for that other stuff to be meaningful. I'm also in favor of an integrated networking stack that makes different applications, even when using TCP or HTTP, look visibly different at the packet level so a NIDS spots weird patterns more easily. Like the MLS extension but not the MLS policy itself. Do it at the application layers with stuff like Ethos's eTypes or a security-enhanced ZeroMQ, where developers don't worry about the plumbing much.
" It's a good academic exercise though."
It's also an industry bringing in tens of millions of dollars at least. That's with costs that are too high, lack of key software support, and little to no advertising. I imagine it could be larger than tens of millions with such obstacles reduced or eliminated.
I think it would be an interesting exercise to attempt to build a reasonably modern device that's truly audited and secure head-to-toe.
If it could be done with proper 'design by contract', inspections, and at a cost that folks could swallow, you might have a new Apple 2 on your hands. Not in the corporate world, at least not immediately (where tomorrow's profits outweigh next-week's), but among security cautious folks and researchers.
I'd love to see if a laptop, for example, could be built to that standard. And if built, if it could actually accomplish real (not hobbyist) work. BlacktOPS or something catchy :)
> But not every binary blob is NSA spyware. If you assume it is, you literally cannot use ANY computer.
You can use one, but you can expect it's exploited. It might not be a happy fact, but we shouldn't deny it if it's true.
The NSA by itself has 40,000 employees, tens of billions in budget, the best tools and tech in the world, and a track record of doing such things. I expect that if they see a valuable vulnerability, they will develop an exploit.
Most of them aren't exploiting software. I like to instead cite the dollar amount they put into backdooring, hacking, or tapping major software or networks: $200+ million a year. Still a staggering amount that supports your point. If people doubt it, just ask whether they think whatever they're using is safe from all the black hats the NSA could hire for just 1% of that amount. And for how long?
> Great sweeping accusations require citations to back them up. Without any citations, the above statement is meaningless hyperbole.
That's basically what I was always telling myself in the back of my mind during the latest US election, every time I heard that the Wikileaks DNC email leaks were originating from Russia :D
That's a bizarre accusation and very easy to fact-check for yourself. It's trivial to run a packet sniffer and see all the information being sent out of your network.
I know for sure that my apple and my linux boxes aren't making any network connections that I don't understand.
Unless you are running a sniffer on a separate device, you're likely far less sure of that than you think you are. There are multiple components in modern computers that have some level of system access and aren't terribly constrained by the kernel (possibly including the processor on your network card itself), and that's before you get into the kernel itself lying to you (either by design or through a module).