
Rooting Out Malware with a Side-Channel Chip Defense System - nsshey
http://spectrum.ieee.org/riskfactor/computing/hardware/rooting-out-malware-with-a-sidechannel-chip-defense-system
======
p4bl0
That is very interesting! I worked on a similar idea (but on the malware side…
:-p) a few years back. The idea of our paper [1] is that if a program could run
and at the same time produce a functionally equivalent version of itself that
uses different instructions (to do the same thing), then power fingerprinting
such programs would be very hard, as each execution's power trace would be
different. We implemented a prototype of the idea (in Lisp, for simplicity)
which takes any program and outputs an equivalent one that uses a Quine-like
[2] construction to execute the original program while simultaneously emitting
a new, different version of itself (though the size of the generated code
exploded quickly in our proof-of-concept prototype, so it was not very
practical for real-world use).
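The diversification idea can be illustrated with a loose Python sketch (this is not the Lisp prototype from the paper, and all names here are made up): several instruction sequences compute the same result, and the runtime picks one at random per execution, so no single power trace characterizes the program.

```python
import random

# Three functionally equivalent implementations of "double x", each
# built from a different instruction sequence. A diversifying runtime
# could pick one variant per execution, so each run exercises
# different hardware operations and yields a different power trace.
def double_via_add(x):
    return x + x      # addition

def double_via_shift(x):
    return x << 1     # bit shift

def double_via_mul(x):
    return x * 2      # multiplication

VARIANTS = [double_via_add, double_via_shift, double_via_mul]

def diversified_double(x):
    # Every call executes a randomly chosen but equivalent variant --
    # the property that would frustrate power fingerprinting.
    return random.choice(VARIANTS)(x)
```

The paper's construction goes further, of course: the program rewrites *itself* quine-style on each run rather than choosing among a fixed set of variants.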

[1]
[http://pablo.rauzy.name/research.html#wistp11](http://pablo.rauzy.name/research.html#wistp11)

[2]
[https://en.wikipedia.org/wiki/Quine_%28computing%29](https://en.wikipedia.org/wiki/Quine_%28computing%29)

~~~
strgrd
I'm sure it won't be long before malware creators find a way to obfuscate
power profiles.

Is it even possible to visualize the power profile of a specific program, and
if so, how? Could you use a simple power analyzer to detect malware, or do you
need something more refined?

~~~
fabulist
This project takes a whitelist approach, rather than a blacklist approach.
Malware authors could certainly turn their programs into random GOTO spaghetti
in order to obtain a unique power consumption pattern. But this approach is to
fingerprint the pattern of normal operations, and raise an alarm when the
device starts doing abnormal things.

------
fabulist
Some thoughts on how a hacker might defeat this:

* Broadcast signals at the PFP enabled devices to cause false alarms, in order to convince the operators that the devices are unreliable.

* Hack the system at a time when the administrators are performing an "unusual" operation (such as installing a firmware update).

* Write your exploits such that they establish a minimal beachhead in the hacked daemon too quickly to raise an alarm (the article said it could detect malware in "milliseconds" -- you can do a lot in a millisecond), and then proceed very, very slowly. The PFP would still detect your malicious activity, but you would keep the signal/noise ratio low enough that an alarm wouldn't be raised.

* In some circumstances it may be possible to achieve exploitation while restricting yourself to "usual" activity. Turning a switch into a hub would raise alarms; enabling a disabled ethernet port probably wouldn't. Similarly, using functionality which is already included on the device (such as functions in libc) would probably go unnoticed.

This is, of course, pure speculation.
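The "proceed very, very slowly" point is essentially rate limiting: cap how much abnormal work happens per unit time so the deviation stays below a detector's threshold. A toy sketch (hypothetical class and parameter names, purely illustrative):

```python
import time

# Toy pacing helper: space "abnormal" operations out so that, averaged
# over any detection window, activity stays below some hypothetical
# anomaly threshold.
class LowAndSlow:
    def __init__(self, ops_per_second):
        self.interval = 1.0 / ops_per_second  # minimum gap between ops
        self.next_allowed = 0.0

    def wait_turn(self, now=None):
        # Return how long the caller must sleep before performing the
        # next operation; 0.0 means it may proceed immediately.
        now = time.monotonic() if now is None else now
        delay = max(0.0, self.next_allowed - now)
        self.next_allowed = max(now, self.next_allowed) + self.interval
        return delay
```

Whether this actually evades a given detector depends entirely on its integration window and threshold, which are unknown here.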

------
api
"All malware, no matter the details of its code, authorship, or execution,
must consume power. And, as PFP has found, the signature of malware’s power
usage looks very different from the baseline power draw of a chip’s standard
operations."

I stopped right there because I smell assumptions that any number of things
might violate: background tasks, VMs, VPN software, background P2P networking
software (e.g. belonging to a Kad network), auto-updaters, debuggers, system
profilers, management software for IT asset control, etc.

That and malware authors would immediately set about working to make their
power profiles more typical. What they've found is that _current_ malware
produced with no attention to obfuscating this aspect often looks different
from _current_ typical application code.

Can't see this being robust at all. Sounds like a whole pile of false
positives too. I suppose something like this could be used as part of an array
of detection methods deployed to devices to help monitor for suspicious
_changes_ in baseline activity that could then be investigated.
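Monitoring for changes in baseline activity, in its simplest form, is just thresholding the deviation from a learned profile. A minimal z-score-style sketch (the numbers and thresholds are made up for illustration, not taken from PFP's product):

```python
import statistics

# Minimal baseline-deviation check: learn the mean and standard
# deviation of "normal" power samples, then flag readings that stray
# too far from them.
def build_baseline(samples):
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(reading, mean, stdev, z_threshold=4.0):
    # A high z_threshold trades missed detections for fewer false
    # positives -- exactly the trade-off discussed above.
    return abs(reading - mean) > z_threshold * stdev

# Hypothetical idle power draw in watts.
baseline_samples = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0]
mean, stdev = build_baseline(baseline_samples)
```

Any legitimate spike in load (an auto-updater, a debugger, a VM boot) lands on the wrong side of the threshold just as easily as malware would, which is the false-positive problem in a nutshell.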

~~~
warkdarrior
Indeed, the very next paragraph lists limitations:

" _Computers with users regularly attached to them, like laptops and
smartphones, often have no baseline routine from which abnormal behavior can
be inferred._ [emphasis mine!] So, PFP officials say, their technology is at
the moment better suited to things like routers, networks, power grids,
critical infrastructure, and other more automated systems."

------
userbinator
There is no mention of false-positive rates, one of the things about
traditional antimalware software that tends to worry me a bit. Even "for low-
level systems that follow a predictable and standard routine", things like
spikes in load could look unusual.

Using "side channel" hints is actually what (some) humans do to suspect
malware infection too - unusually large amounts of HDD or network activity
while the system is idle, or CPU fans running much more than normal, tends to
raise suspicion.

