
Fun with Shellshock - luu
http://blog.regehr.org/archives/1187
======
mabbo
tl;dr: This software immediately recognized Shellshock for what it was,
modified live processes to protect itself, then wrote a patch and re-compiled
bash in a few minutes... all with a single malicious request.
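(For context, the Shellshock bug itself, CVE-2014-6271, is trivial to demonstrate; this is the widely circulated test one-liner, not something from the A3 writeup:)

```shell
# Shellshock (CVE-2014-6271): bash imports function definitions from the
# environment, and an unpatched bash also executes any commands that
# trail the function body.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
# An unpatched bash prints "vulnerable" before "this is a test";
# a patched bash treats x as an ordinary variable and prints only
# "this is a test".
```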

This sounds like science fiction. I love it.

~~~
junto
I have to be honest; at first I genuinely thought it was science fiction. It
sounded like a William Gibson novel.

------
cookiecaper
While this is cool, if it can block the attacks, I think I'd rather leave it
at that than have it guess at a possible fix and automatically recompile
applications. That's definitely a cool experiment, but in real life I'd be way
too nervous to allow software to make those calls on its own. If there was
something it could subscribe to and receive human-vetted patches for a certain
attack signature and _then_ automatically recompile, that'd be pretty cool and
feel a lot safer.

~~~
segmondy
Stop being afraid of the future. This is experimental technology, and one day
it will be the present, and normal. A vulnerability is not fixed till it's
patched. If you tried to access /etc/foo, A3 will recognize you don't have
access and block it, but if you have access to /usr/bin then it won't block
it. An attacker can then craft an exploit that only uses what they know you
have access to, say netcat in /usr/bin, or wget to get a shell, and carefully
try to bypass your other security layers.

~~~
xg15
While I agree that the technology is impressive, I personally don't want to
wade through a codebase full of machine-generated patches. Making sense of
code when the original author is not available is bad enough; I don't want to
know how bad it gets when there is no author at all.

~~~
bigiain
I've got an old-ish SVN repo at work with a ~115k LOC AngularJS app, which is
the end result of some very poorly thought-through decisions to "allow all
merges". It "works", but it's now completely unmaintainable (and has since
been rewritten from scratch, as that was the lowest-effort way to add new
features). It's a real shame, because the original dev (who wasn't involved in
the botched merge) was actually doing some really good work, but the business
kinda let him down on Angular training for other staff, pulled him off that
project, then let it crash and burn while not explicitly blaming him but
making it clear they didn't blame anyone else... Not our proudest moment...

------
azinman2
So impressive, although its source code patches can likely disable needed
functionality and, in the process, allow a 0day to turn into a denial-of-
service attack.

~~~
akx
DoS is preferable to RCE though, in my opinion...

------
contingencies
This is quaint but looks too much like "look how I solved this" (PS. give me
funding) multiple weeks after the fact, and is definitely coming from such a
bureaucratic environment. The overall approach leaves the critical question of
service-specific security policy development out of scope.

We've had NSA SELinux since 2000; the reason it's rarely used is that regular
people don't have time to grok that level of detail. In response, we have
things like _grsec_ learning mode.

As with any time app-specific ACLs are brought out, the _real_ question here
is process, not technology. I would argue that what is really needed is not a
kernel-specific solution or a protocol-specific solution (notice how many of
the new SOA infrastructure half-solutions are HTTP-only or port-level-only
regarding network policy?) but rather a generic, industry-wide approach toward
multi-faceted security policy generation for arbitrary services including all
aspects of service behavior (at both host and network levels). That in turn
requires a virtualization paradigm, networking paradigm and OS neutral service
devops process. This is something that we are moving towards slowly (eg.
widespread _git_ use, standardish build tools, containers, VLANs), but is not
widely discussed.

We have the tools in major kernels already: syscall monitoring, relatively
mature multi-subsystem security policies, network ACLs and monitoring systems,
increasingly sophisticated networking virtualization solutions like Open
vSwitch, filesystem-neutral monitoring. The pain in the ass is putting it all
together in an average-app grokkable manner that doesn't demand comprehension
of low-level OS internals from regular developers, nitpicking application
behaviour comprehension from operations infrastructure, or the employment of
security nerds to reap the benefits of some of the available lower-level
technologies.

I fear commercial offerings will never take us to this position: it's simply
not an easy sell (intangible, long term benefits vs. shorter overall timeframe
and less cognitive overhead / requisite management grok for current and
familiar development processes) and too complex to implement... in most cases
requiring a complete devops process change. Instead, I predict that we will
slowly see open source devops tools layer upon RCS/VCS to provide a common
continuous integration / deployment process that integrates effective multi-
subsystem application profiling for security policy development in parallel to
regression tests and other pre-deployment processes.

Further personal thoughts on this area @
[http://stani.sh/walter/pfcts](http://stani.sh/walter/pfcts)

~~~
ssclark
I am one of the developers, so I can offer some small clarifications. While it
was not clear from the writeup (or from when it got posted), this experiment
was run 2 days after the bug disclosure and well before patches started to
stabilize. The post had to go through a review before we could publish it.

The A3 stack relies on syscall monitoring (via virtual machine introspection),
network filters (which are protocol specific) and filesystem-neutral
monitoring. Some of this stuff is readily comprehensible to a sysadmin, other
stuff is not. Automatic application profiling is an area of ongoing work for
the project.
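(As a rough userspace analogue of the syscall-monitoring layer: A3 observes
syscalls from outside the VM via introspection rather than with strace, but
strace makes the same kind of data visible; the traced command here is just an
illustrative example.)

```shell
# Userspace sketch of syscall monitoring: trace the exec and open-family
# calls a bash child process makes. A3 collects equivalent events via
# virtual machine introspection, not via strace.
strace -f -e trace=execve,openat bash -c 'cat /etc/hostname' 2>&1 | head -n 5
```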

~~~
drvdevd
This is very interesting work. Thanks for publishing it. One thought
experiment for you (which perhaps you've discussed already): could an attacker
potentially influence and predict the state of patched software on the target
system, introducing vulnerabilities which did not exist prior to patching?
Also along that line, have you attempted to fuzz the input fields in scenarios
such as your shellshock example?

~~~
ssclark
That thought experiment falls under the umbrella of adversarial machine
learning, which is something that we are aware of but has not been a focus for
us thus far. Getting the correct adaptation in the first place was the primary
goal. To trigger adaptation/patching, an attacker needs to drive the protected
application to an undesirable state (exploit it, in other words), so an
insidious attack that predicted and triggered multiple patches in the name of
creating some ultimate vulnerability is a pretty high bar to clear. I would
not claim it is impossible, but I do not know under what conditions that path
would ultimately be easiest for the attacker.

We have done some work with fuzzing malicious inputs to produce better network
filters, but that work focused on integrating a 3rd party fuzzer:
[https://dist-systems.bbn.com/papers/2013/Automated%20Self-Adaptation/A3_SASO_WS_2013-CR-Final.pdf](https://dist-systems.bbn.com/papers/2013/Automated%20Self-Adaptation/A3_SASO_WS_2013-CR-Final.pdf)

