
 Remotely Attacking Network Cards (or why we do need VT-d and TXT) - wglb
http://theinvisiblethings.blogspot.com/2010/04/remotely-attacking-network-cards-or-why.html
======
tptacek
"Advanced" network cards support IPMI with a protocol called RMCP, which runs
over IP and can be delivered remotely. The cards implement IPMI/RMCP with an
RTOS running on embedded RISC CPUs. Like every card with an embedded RTOS-
running processor, they're coded in C.

There are two things that fall out of this.

First, when you write a protocol stack in C, you create memory corruption
flaws. It's hard to name any piece of C code that has avoided this problem.
Microsoft spends tens of thousands of dollars per dot release to have some of the best
testers in the industry fuzz bugs out, and they still slip up. Dan Bernstein
managed to let an LP64 overflow slip into qmail.
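To make the failure mode concrete, here's a toy sketch (mine, not from the article — names and the `PAYLOAD_MAX` limit are hypothetical) of the classic length-field bug that fuzzing shakes out of C protocol stacks, written the safe way. The broken pattern is the same function with the two length checks deleted: it trusts the attacker-controlled length byte and `memcpy` smashes the stack buffer.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define PAYLOAD_MAX 64

/* Wire format (hypothetical): [length byte][payload bytes...].
 * Bounds-checked parse: validates the claimed length against both the
 * destination buffer and the bytes actually received. Returns 0 on
 * success, -1 on a malformed packet. */
static int parse_payload(const uint8_t *pkt, size_t pkt_len,
                         uint8_t out[PAYLOAD_MAX], size_t *out_len)
{
    if (pkt_len < 1)
        return -1;                 /* no length byte at all */
    size_t claimed = pkt[0];       /* attacker-controlled */
    if (claimed > PAYLOAD_MAX)
        return -1;                 /* would overflow 'out' */
    if (claimed > pkt_len - 1)
        return -1;                 /* claims more than was received */
    memcpy(out, pkt + 1, claimed);
    *out_len = claimed;
    return 0;
}
```

Dropping either of the two `claimed` checks reproduces the bug class: the first omission is a plain buffer overflow, the second an over-read of whatever follows the packet in memory.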

So, thing one: the trend towards more advanced network cards (and storage
processors and motherboards and offload boards and etc etc etc) moves us to a
place where our underlying hardware is vulnerable to software flaws, even if
our operating systems and application code are extensively assured.

Second, the x86 security model moving forward is based on the idea that the
chipset can assure that known-good code is running, and that the known-good
code can use new chipset features to sandbox code, either in VMs or with
runtime protection features. This model was designed _mostly_ to defend
against attacks originating from application and OS flaws.

But that model fails badly when the assumptions it makes about attack vectors
fail. So, for instance, if you can take over the RTOS running on a network
card, then however much Intel and AMD have planned to eventually deal with IO-
level attacks, the systems deployed today get trounced by the DMA controller.
Right now, if you can program the DMA controller, you win.
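A toy model (my sketch, not real driver or firmware code) of why that's true: CPU stores go through the MMU's page permissions, but a bus-master device's DMA writes hit physical memory directly — without an IOMMU (VT-d) remapping device accesses, nothing stands between a compromised NIC and kernel memory. The arrays below are stand-ins for physical RAM and the page tables.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define PAGES 4
#define PAGE_SIZE 16

static uint8_t phys[PAGES * PAGE_SIZE];          /* "physical RAM" */
/* Pages 1-2 are "kernel text": read-only as far as the CPU knows. */
static bool writable[PAGES] = { true, false, false, true };

/* CPU store: checked against page permissions, as the MMU would. */
static int cpu_write(size_t addr, uint8_t val)
{
    if (addr >= sizeof phys || !writable[addr / PAGE_SIZE])
        return -1;             /* fault: protection enforced */
    phys[addr] = val;
    return 0;
}

/* Bus-master DMA write: no permission check at all. Without an IOMMU
 * there is nothing between the device and physical memory. */
static int dma_write(size_t addr, uint8_t val)
{
    if (addr >= sizeof phys)
        return -1;
    phys[addr] = val;          /* lands even in "read-only" pages */
    return 0;
}
```

VT-d's fix, in terms of this model, is to give `dma_write` its own permission table per device, so a rogue NIC can only reach pages the OS explicitly mapped for it.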

That's (I think) what Rutkowska has been saying for the past several years.
She's right, of course. But the reality is that anything we do to address this
problem architecturally is going to be unsound for years to come. So the
immediate thing we need to do is test our complex hardware to raise the cost
for attackers of discovering and exploiting these kinds of vulnerabilities.

~~~
stcredzero
What about using a language immune to buffer overflow for the embedded
controller? I know such languages can be made. (For example, one could
implement a Smalltalk using formal methods, with an eye towards eliminating
buffer overflows.) I'm not sure there currently exists a language suitable to
implement such embedded controllers.

(Smallest Smalltalk image I know of was 45k. Squeak can be stripped down to ~
350k, which is about half the size of Perl's runtime footprint in the 1990s.)

~~~
tptacek
The people implementing these images don't even care enough to have their code
reviewed (the vulnerability here appears to have been trivially fuzzable).
They aren't switching to Smalltalk to deal with a problem they haven't even
considered.

~~~
stcredzero
I'm not advocating that. I'm just pointing out that secure languages for
embedded programming must be possible.

------
locacorten
Here's what's going on at a high-level.

1\. Somebody discovered a security vulnerability (i.e., a bug) in the
implementation of a network protocol. Apparently the bug can be exploited
remotely, which means that somebody far away can break into your machine if
your software has this bug.

2\. This was used by someone else to push forward their agenda. In particular,
their point is that newer dynamic root of trust technologies are better than
older static root of trust technologies. The goal of these
technologies is to make a piece of code execute securely (i.e., without being
compromised or modified by an attacker). Static root of trust can make a piece
of code execute securely only by trusting _the entire_ software stack from
boot time until the execution of the piece of code. Dynamic root of trust
bypasses this entire software stack -- it allows you to just verify that the
piece of code hasn't been modified before it's being executed.
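The difference can be illustrated with a toy measurement model (mine, not from the post). A TPM-style "extend" folds each measured stage into a running register; FNV-1a stands in for SHA-1/SHA-256 here — it is NOT cryptographic, just small enough to keep the sketch self-contained. The stage names are hypothetical.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define FNV_BASIS 0xcbf29ce484222325ULL

/* FNV-1a: a stand-in for a real hash so the sketch needs no crypto lib. */
static uint64_t fnv1a(uint64_t h, const char *data)
{
    for (; *data; data++) {
        h ^= (uint8_t)*data;
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* extend(reg, stage): fold one measured stage into the register, so the
 * final value depends on every stage ever extended into it. */
static uint64_t extend(uint64_t reg, const char *stage)
{
    return fnv1a(reg, stage);
}

/* Static root of trust: measure the whole chain from boot. Any stage
 * changing (BIOS, bootloader, kernel, ...) changes the final value. */
static uint64_t measure_static(const char *const stages[], size_t n)
{
    uint64_t reg = FNV_BASIS;
    for (size_t i = 0; i < n; i++)
        reg = extend(reg, stages[i]);
    return reg;
}

/* Dynamic root of trust (SENTER/SKINIT-style launch): measure only the
 * piece of code about to run, ignoring everything loaded before it. */
static uint64_t measure_dynamic(const char *code)
{
    return extend(FNV_BASIS, code);
}
```

The payoff is in what each measurement is sensitive to: the static chain's final value changes if any earlier stage was tampered with, while the dynamic measurement of the target code is the same regardless of what booted before it — which is exactly why it doesn't have to trust the whole stack.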

3\. Now, the bug from step #1 can (in theory) be used for someone to "break"
into the persistent storage of the NIC and compromise the NIC "forever" by
changing its firmware to a "bad" firmware. This won't be caught by the static
root of trust technology because such technologies do not typically check the
firmware of a NIC at boot time. And thus, in theory, the dynamic root of
trust is "better" because it doesn't rely on making sure that the entire
software stack remains uncompromised.

Now .. my opinions.

a. Remote vulnerabilities are very problematic because they lead to remote
exploits. The lesson here -- get very experienced/senior/skeptical designers
to implement networking protocols. Here's where it's worth hiring the smart
guy. Implementing a new protocol in C from scratch is crazy. I'd fire this guy
if he worked for me.

b. Dynamic root of trust is better than static root of trust on paper. In
practice, dynamic root of trust is very hard to implement in a way that stays
secure while doing something useful. When executing in "secure" mode with
dynamic root of trust, you cannot use interrupts, which basically means it's
almost impossible to do anything useful (like sending or receiving a network
packet).

------
kevindication
Is there a "tl;dr" equivalent for an article that's completely buried in
initialism soup?

~~~
viraptor
tl;dr - Nothing is secure. Even Intel's specially designed Trusted Execution
Technology (close to the Trusted Platform idea) has known flaws. You can be
hacked at levels which you cannot control (firmware). It's tricky (not a
script-kiddie level exploit), but possible and many existing holes are not
published/known, because researchers would rather do something interesting
than uncover yet another bug using the same technique. If you have government-
level influence, start complaining to Intel (et al.).

