

Hosting backdoors in hardware - dutchbrit
https://blogs.oracle.com/ksplice/entry/hosting_backdoors_in_hardware

======
cs702
Every PCI device in your computer could be hosting a backdoor: _"The PCI
specification defines an 'expansion ROM' mechanism whereby a PCI card can
include a bit of code for the BIOS to execute during the boot procedure. This
is intended to give the hardware a chance to initialize itself, but we can
also use it for our own purposes."_

The idea is to hook the interrupt handler that normally invokes BIOS code for
handling hard drive I/O at boot time so it invokes instead code in the PCI
device that applies a backdoor patch to the Linux kernel when it is read off
the disk. The author explains how to patch the Linux kernel by overwriting an
obscure, seldom-used error message string, and provides sample code for a
simple kernel patch that will listen for IP packets with an unused protocol
number and run any payload delivered via them on a Linux shell with root
privileges.
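The trigger logic described above can be sketched in user-space Python (the original is a C patch inside the kernel, which I don't have in front of me; the protocol number 254 and the header parsing here are illustrative assumptions, not the article's actual values):

```python
# RFC 3692 reserves protocol numbers 253 and 254 for experimentation;
# a backdoor like the one described just needs any number no
# legitimate traffic on the host uses. 254 is an arbitrary stand-in.
TRIGGER_PROTO = 254

def extract_payload(packet: bytes):
    """Return the payload of an IPv4 packet that carries the trigger
    protocol number, or None if the packet should be ignored."""
    if len(packet) < 20:                   # shorter than a minimal IPv4 header
        return None
    version_ihl = packet[0]
    if version_ihl >> 4 != 4:              # not IPv4
        return None
    header_len = (version_ihl & 0x0F) * 4  # IHL field is in 32-bit words
    protocol = packet[9]                   # protocol field, per RFC 791
    if protocol != TRIGGER_PROTO:
        return None
    return packet[header_len:]             # bytes the backdoor would feed to a root shell
```

Every other packet falls through untouched, which is what makes the thing so hard to notice: unless you happen to capture traffic with the magic protocol number, the host behaves normally.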

_It's scary how straightforward this looks._

Whenever I read things like this, I'm reminded that every day we're trusting
our data and privacy to the people who design, build, and distribute all the
hardware (and software) we use but didn't create ourselves from scratch.

To paraphrase Ken Thompson, we have little choice but to trust trust.[1]

\--

[1] [http://cm.bell-labs.com/who/ken/trust.html](http://cm.bell-labs.com/who/ken/trust.html)

~~~
frank_boyd
In other words, there's a serious "open hardware" market opportunity waiting
to be captured.

~~~
daeken
There's really no way for open hardware to be better. Unless you have your own
fab, the only way to ensure that back doors haven't been added at the hardware
level is to decap and manually validate every chip. That's pretty clearly not
viable.

~~~
autodidakto
Don't mix up "better" with "perfect". That's the mistake made by people who
post a link to the trusting-trust paper: trust isn't binary. "Perfect or give
up" is a false dichotomy.

I sometimes say "trust no one", which seems to support that false dichotomy,
but I mean to say "Reduce and distribute trust".

Open source code and hardware doesn't remove trust. It reduces the amount of
trust required and distributes it among more parties. It makes betrayal
harder, more expensive, more temporary, and less destructive. That's not
perfect, but that's much, much better.

There's a world of difference between closed source software from
MegaCorporatism Inc and open source -- even if an evil genius is still
technically capable of sneaking something into a compiler or chip.

~~~
jacquesm
Well, in this case it is a binary thing. In an applied sense you're right
that it isn't binary, but when you talk about feasibility it is
black-or-white: you're either sure, or it might as well be compromised.

~~~
Karunamon
Not necessarily. As the author of the article said, it's easier to rely on
software bugs than to ship a crocked piece of hardware like this. If you
manage to backdoor a mass-production piece of hardware (say a wifi card or
something similar), you're just one odd error away from someone curious
finding out what's happening, raising the alarm, and the whole operation
coming crashing down.

From a logistics standpoint, it's easier to break software.

------
einhverfr
It's funny. Folks used to laugh at me for being too paranoid to allow loadable
kernel modules on my firewalls (actually I preferred to host the kernels on
read-only media and have no LKM support at all, with custom-compiled kernels).
What seemed horribly paranoid to so many people seems so reasonable today.

That being said, one of the key issues is that I'd expect a compromised
motherboard or controller (for example, an ethernet controller) to be able to
make such changes in RAM after the boot process has completed, with or without
the help of the BIOS. The level of paranoia certainly needs to be stepped up a
bit.

~~~
geofft
The thing is that distros tend to use loadable modules, and if you want to
avoid that you need to compile your own kernel (as you seem to be doing), and
at least I am a lot happier getting security updates from my distro than being
on the hook for recompiling them myself in a timely fashion.

You can get most of the security benefits of avoiding loadable modules by
setting the sysctl kernel.modprobe (i.e., /proc/sys/kernel/modprobe) to
"/bin/false" instead of "/sbin/modprobe", late in the boot process. So
everything needed to initialize your hardware is loaded, but anything that an
unprivileged user attempts to autoload (like a buggy kernel module for a
socket family you've never heard of) fails.

I have a config like this on all the security-sensitive servers I run, which
tend to have a few thousand unprivileged users. It's actually a shell script
that logs the attempt and then returns false, instead of silently returning
false, but "/bin/false" is good enough.

But do note that this is a bit orthogonal to the issue mentioned in the
article: the proposed attack involves the victim machine having the kernel and
modules intact on disk, but device firmware compromised so that it changes the
kernel after it's been loaded into memory.

~~~
dredmorbius
Building a kernel isn't difficult (and used to be pretty much required). Build
support in many distros is quite robust.

The real challenge is that once you've compiled a kernel, _that's all you've
got_ in terms of support. If you need to add filesystem support, a networking
capability, additional driver support, etc., you've got to configure and build
a new kernel, _and test it_, which is distinctly less convenient than
autoloading an existing module (or, in many cases, even one you've newly
compiled against a running kernel).

I seem to recall a kernel option or sysctl, possibly from OpenBSD/FreeBSD,
which prevents loading of additional modules once it's been set. This allows
you to boot and load modules, but then no more. If your boot media are read-
only, this gives a fairly high level of confidence.

I cannot find a reference though.

~~~
pedro84
On linux:

    
      echo 1 > /proc/sys/kernel/modules_disabled

[http://www.outflux.net/blog/archives/2009/07/31/blocking-module-loading/](http://www.outflux.net/blog/archives/2009/07/31/blocking-module-loading/)

~~~
dredmorbius
Awesome, thanks.

------
iuguy
You don't even need this. Patrick Stewin and Iurii Bystrov[1] released an
excellent paper earlier this year (and Patrick presented his research at
44CON[2] last week) on abusing Intel's iAMT functionality to create an in-
firmware keylogger and undetectable (from the host's perspective) exfiltration
mechanism for streaming out keystrokes and receiving malware updates.

[1] -
[http://stewin.org/papers/dimvap15-stewin.pdf](http://stewin.org/papers/dimvap15-stewin.pdf)

[2] - [http://44con.com/talks/#persistent-stealthy-remote-controlle...](http://44con.com/talks/#persistent-stealthy-remote-controlled-dedicated-hardware-malware)

~~~
routelastresort
Wait, you mean there's a better technical explanation than the CNN-quality
original article? (which is just trying to sell Oracle Ksplice)

------
bingaling
See also: the potential for rootkits on disk controllers:

[https://news.ycombinator.com/item?id=6148347](https://news.ycombinator.com/item?id=6148347)

------
bluedino
Doesn't this basically make it impossible to trust any hosting company that
provides the equipment for you? The NSA could have every server at Rackspace
backdoored.

~~~
hrjet
The more I think about this, the more I'm convinced that the only way to
avoid private data snooping is to pollute / poison the data.

If all channels of communication are flooded with poisoned messages, it
wouldn't matter who / what snoops the data. The poisoning needs to be obvious
so that the intended recipient can immediately ignore it. At the same time, it
needs to be ubiquitous so that machines can't filter it.

~~~
moconnor
If the poisoning is obvious to the intended recipient, at some stage it will
also be obvious to anybody collecting enough (poisoned) data on them.

This is a bit like the spam / spam filter arms race.

~~~
hrjet
Yeah, human recipients and snoopers would both be able to filter out the
poison. But it would make automated collection difficult.

Another idea could be a reverse captcha: all messages by default could be
encoded as images. (Hey, in fact I think this is a brilliant idea if I say so
myself!) Combine that with poisoning, and we could be safe from automated
collection for at least a decade. Combine that with encryption and other
security measures and that would be awesome.

In fact I'm on an idea spree. What if messages were encoded with a captcha?
Enter the captcha to decode the message. This encoding is purely to eliminate
automated collectors and indexers.

Spam is easier to tackle since a spammer can be tainted forever, but a
poisoned feed still needs to be processed every single time.

------
gmuslera
It seems to be very specific to one OS, and maybe even one kernel version; how
generic could it be? Aren't there kernel security modules that could detect
whether something has been overwritten? The paper was written in 2010; maybe
something has been done in kernel development since then to allow checking for
that.

What about detecting that this kind of backdoor is present in your current
hardware? Is that even possible while the backdoor is running? Or could
booting some less common OS for that kind of validation (e.g., one of the
BSDs, or a kernel with modules disabled) avoid triggering the backdoor and
make detection possible?

------
dobbsbob
Wouldn't your stateful packet inspection firewall notice the remote attempts
to reach the backdoor? Assuming it's running OpenBSD and not also full of
hardware backdoors.

This is one reason I never buy hardware p2p off bitcoin trading sites and
forums, since it would seem logical to target those buyers, who may have
stuffed wallets to clean out.

~~~
bcoates
I'm sure the use of a funny protocol number is just to keep the example
simple; making an evil module that opens a tunnel a DPI firewall wouldn't
notice is routine gruntwork for the kind of attacker who can manage to sneak
an evil PCI device onto your bus.

------
ffrryuu
CPUs from US companies could be hosting backdoors themselves.

~~~
AsymetricCom
It's more likely Intel is hiding a back door in their microcode mechanism,
which has been suspected for some time. AMD publicly denied having any back
door in their hardware: [http://www.fudzilla.com/home/item/32120-amd-denies-existence...](http://www.fudzilla.com/home/item/32120-amd-denies-existence-of-nsa-backdoor)

------
gonzo
Esr needs a new machine.

