3rd Shellshock Vulnerability Found (seclists.org)
101 points by propercoil on Sept 26, 2014 | 42 comments



This is not a vulnerability, it is intended functionality. Some scripts undoubtedly depend on overloading a shell builtin command with a function (even though it is gross).

The post's proposed exploit path is not possible on Linux and most other OSes, because you cannot have a setuid script.[1]

[1] http://unix.stackexchange.com/questions/364/allow-setuid-on-...


From [1] "Linux ignores the setuid bit on all interpreted executables"

Not a vulnerability.


Even if you have a setuid script it appears that bash won't import functions in that case.

However, the fact that the script isn't setuid isn't really the mitigating factor for shellshock, since a setuid program could indirectly run /bin/sh.

Again, though, this doesn't seem to be another shellshock problem. In shellshock you only needed to control the value of an environment variable, not its name. If you can control the name and the value, then there are plenty of other ways to attack programs.


> However the fact that the script isn't setuid isn't really the concern for these shellshock, since a setuid program could indirectly run /bin/sh.

A setuid program which runs any child process without having first cleared the environment or authenticated the caller for root access is already broken by means of LD_PRELOAD.


If you're vulnerable to this, then you're vulnerable to someone setting PATH and changing all executables that you refer to.

That's not the "shock" part of "shellshock"; that's a long known issue. Applications which pass arbitrary variables from the network through environment variables already add a unique prefix to the variables, to prevent this kind of problem. That unique prefix also prevents them from accidentally conflicting with the name of an actual executable for this case.

Now, there are still going to be bugs with suid executables not sanitizing their environment, but those apply in many other ways, such as via PATH and LD_PRELOAD.

The "shock" part of shellshock was that it was possible to execute arbitrary code, upon just loading a variable in Bash, regardless of what its name was, depending only on the contents, which in many cases can be attacker controlled.
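The canonical probe made this concrete. A sketch, assuming a patched bash (on a vulnerable, pre-patch bash the same line would also print "vulnerable"):

```shell
# Classic Shellshock probe: the payload rides in the *value* of an
# arbitrarily named variable ("x" here); the name does not matter.
# A vulnerable bash executes the trailing command while importing the
# function definition; a patched bash prints only "hello".
env x='() { :;}; echo vulnerable' bash -c 'echo hello'
```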


Exactly. If you are letting an attacker define not just the values of environment variables, but also the names, then you've already lost.

Forget bash. LD_PRELOAD by design lets you load arbitrary code into any child process. Many other environment variables affect libc and other common libraries in arbitrary ways. setuid programs must clear their environment at startup to be safe. Programs like sudo can restore that environment once the user is authorized for root access.
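A minimal sketch of that environment-clearing discipline, using `env -i` to launch the child with an empty environment (the PATH value re-added below is just an illustrative choice):

```shell
# Start the child from a clean slate: env -i drops every inherited
# variable, so an attacker-set variable (or LD_PRELOAD, or PATH)
# never reaches the child. Re-add only a known-good PATH.
EVIL=owned env -i PATH=/usr/bin:/bin \
    sh -c 'echo "EVIL=${EVIL:-unset} LD_PRELOAD=${LD_PRELOAD:-unset}"'
```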


Wow, this is getting out of hand; this is not really a security vulnerability anymore. It's just how bash passes exported functions. I'll get back to that.

There is some misunderstanding, though, regarding the suid shell script. I can't think of any modern unix other than Solaris that allows those by default anymore. It's just not a concern unless an admin twiddles some sysctl or whatnot. And in the case of Solaris, /bin/sh is not bash. Also, /bin/sh does its own setuid handling after comparing against uid 100 (unless the -p option is passed), so if people were not adding users to a group or to sudo, they would often create a ksh or perl script instead. Anyway, that's just rambling, sorry. The short answer is that I would be really surprised if there were any modern system that both used bash as /bin/sh and allowed suid interpreter scripts.


Doh, forgot to get back to the first point, sorry.

My thinking is that at this point the whole business of exporting functions (and anything else other than plain environment variables, like arrays) is just a bad idea in the default case. I actually do do it, but for a silly reason, and I can totally see how I should just put those functions in bashrc for the same effect anyway.

So I really would like to see all of that disabled by default in bash, with a shopt added to enable it for anyone who really needs it.

I've heard about other ideas, like prefixing or suffixing the name, or adding a BASH_FOOBARBAZ variable that lists the exported functions. But exporting functions is just not something most people do, and changing the mechanism would be a pain for the people who do need it now. I can totally see someone having done this from C or perl, knowing that if they added 'cat=() { ...' or whatnot to the environment it would change what ran in the subshell after they called system(). For those people it would be really annoying to have all that break in a way that forces them to rework all that baroque code, instead of just adding an option, something like system("exec /bin/bash --foobarbaz ... ") now.

Plus there are other programs, like sudo, that already know how to deal with the '() {' style env vars and clear them out by default. People would have to update all of those too (plus whatever ancient suid C launcher programs they already had doing this kind of env var clean-up in their own broken ways, from before sudo became the way to do this stuff).


Debian has already released a bash package patched to treat only BASH_FUNC_FOO() variables as exported functions.

It's the second half of the security fix, so I recommend upgrading.

There is a small potential for breakage, as you mention, but less than from ripping the feature out entirely.
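For the curious, you can inspect the encoding on a patched system. The exact decoration has varied across patch iterations (the early distro patches used a `BASH_FUNC_foo()` name, the eventual upstream fix uses `BASH_FUNC_foo%%`), so this sketch only greps for the common prefix:

```shell
# Export a function and show how a patched bash encodes it in the
# environment: the function name gets a BASH_FUNC_ prefix plus a
# suffix that an ordinary NAME=value assignment cannot produce.
bash -c 'f() { echo hi; }; export -f f; env | grep "^BASH_FUNC_f"'
```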


Personally I like this approach from NetBSD much better: the feature isn't ripped out, just disabled by default. It would be nice if there were a single-char shopt, so that set, $-, and shebangs could all work with it; then again, exporting functions might be such a rare (and somewhat questionable) thing to do that it's a good idea to leave it like this, with only a long option. That might force people to think about it some, and to use /usr/bin/env foo=bar ... - /bin/bash, or just do the system("exec ..." trick like I had above. Changing the shebang would be simplest, though, something like #!/bin/sh -g for the scripts that need it. And yes, I have had to deal with code like this, exporting functions from perl and C, in the past; it was for gross reasons, to poorly emulate older systems.

http://seclists.org/oss-sec/2014/q3/755


I should not have replied until I had more time to respond clearly, sorry.

The BASH_FUNC_foo()=() { ... encoding is nice; it's how it should have been from the get-go. Besides having a nice prefix that owns them all (what about arrays, though?), it makes it hard to set an exported function from the shell using =, because of the () suffix. That's clever: it promotes the use of export -f, basically from bash itself.

Unfortunately it breaks how to do this from everything other than bash, because it's not backwards compatible, and people like me had done that. Now I realize that I had done it for not-great reasons, and that there were better ways to do what I wanted. I think most everyone else who had exported functions would feel the same. Very few people actually need exported functions; most can come up with a workaround.

Taking that into account, what is any patch here trying to accomplish? Really it's to prevent a future vulnerability; that should be the goal. Going back: the first vulnerability was that, when the shell was initializing, more got executed than intended. The next one (which I can't figure out how to exploit) was caused by state from that initialization being kept around (in this case, from an error bailing out) and then messing up later execution.

It's sort of a hackish way to do all this, right? It only took a day for someone to find another bug, and I was able to get bash to segfault left and right when I was looking into it myself before the patches. What is the chance that there is another bug in that parser, say? Nobody knows for sure.

The NetBSD approach goes: fix the current bugs; then, because there are so few reasons to export functions, so few people do so, most people can find another approach, and the way they are implemented might be found vulnerable again, do not ever export functions by default. As a courtesy to those who have and who still must, provide an option to allow them, and do not break backwards compatibility, so that those with the most trouble need only add the option instead of changing other complicated, squirrelly things.

The Red Hat approach goes: fix the current bugs; since there are people who export functions, let us do it in a more sensible way. Now that we have a prefix and suffix, we protect against an attacker being able to create arbitrary function definitions. If an attacker could control BASH_FUNC_foo()=, they would go after lower-hanging fruit anyway. As a courtesy, if people used export -f, it remains compatible.

So it's a philosophical thing. The limitation of the Red Hat approach is that there are people like me who feel there will be more bugs in bash. The parser is complicated, it is being called in this early place, and state remains for later; I expect it will not take too much to get bash to segfault again, and then it's careful analysis to see whether a payload can be crafted to get it to execute code instead of simply segfaulting. So I would like a way to have it not do any of that by default. The Red Hat approach does not give me a way to do that. I value that more, plus being backward compatible. The Red Hat approach values having a safer way to pass functions, one that continues to allow it by default, but it breaks compatibility and does not let me disable the feature at all.


ha, I forgot one more point! The fact that I have export -f in my bashrc broke the common '() { a=[><]\' style redirection parser exploit that came later. I thought that was funny.


It's actually not "new"; it's a "feature" that has existed for a long time, which everybody only became aware of once the Shellshock bug became known.

Christos Zoulas from NetBSD makes an argument that I really like: it's a "feature" that was widely unknown and that should be disabled by default:

http://openwall.com/lists/oss-security/2014/09/25/31


Can we go straight to the part where this is disabled behind a --insecure flag and everyone can spend a release cycle migrating untrusted input handling to languages which were designed for it?


> languages which were designed for [untrusted input handling]

Which languages would those be? There are a few languages with a concept of "taint" (Perl, Ruby), but I've never actually seen a language where everywhere untrusted input could come from actually starts tainted, and every secure function actually disallows tainted input. Especially where third-party-native-FFI library calls are involved.


There's an argument that we should be making an effort to add something like Perl's taint mode (which predates the web by half a decade!) or the stronger variants in some academic languages everywhere but even that's overkill for this class of bugs.

The problem here, as anon1385 pointed out, is that shells in general, and Bash in particular, were designed to execute things and maintained at best a weak separation between code and data, in no small part because the concept originated 40 years ago, when computing was a much more trusted place and network connections were rare and not seen as a conduit for never-ending attacks. As a class, that makes shells a bad idea for anything which crosses a privilege boundary; almost any normal language would avoid this particular exploit.


Well it's pretty unusual to write a Java program that accidentally parses, compiles and runs a given string as Java code instead of just treating it as data.

Shells are designed primarily as human interfaces. They try quite hard to execute data they receive, because when used interactively that is what you want. Passing around data that you don't want to be executed involves a lot of careful escaping because everything is done in-band. That's why the history of shell scripting contains countless examples of people accidentally executing file names, abuse of control characters and things like that.
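A tiny illustration of that in-band problem: the same string is inert when quoted as data, but runs as code the moment it passes through an extra round of shell evaluation (eval here stands in for the many places shell scripts re-parse strings):

```shell
# A filename-like string that happens to carry a command substitution.
name='$(echo injected)'

# Quoted, it is plain data:
printf 'as data: %s\n' "$name"

# Passed through eval (an extra round of shell parsing), the
# substitution actually executes:
eval "printf 'as code: %s\n' $name"
```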


I just tried this in zsh 5.0.6 (x86_64-apple-darwin13.3.0). It reports vulnerable as well.

edit: Yes, this feature works when formatted correctly for zsh, as "function ls () { echo vulnerable }". However, I was wrong in that zsh -c will not run the function (though of course running "ls" in the same session will). I'm going to call this not a problem.

edit: When trying this one-liner in zsh: "env x='() { :;}; echo vulnerable' zsh -c 'echo hello'" (as suggested by https://superuser.com/questions/816622/does-the-shellshock-b...), the output indicates my shell is vulnerable. Could someone please try and replicate?


Yeah, it's a feature, working as intended.

This is how you define a shell function and then use it in sub-scripts.

As the author noted, using this as an exploit requires control of the variable names, and common tools (httpd, dhclient, etc.) that set variables in the environment have explicit naming conventions in place to prevent this.

To be clear: I'll change my tune if someone finds a way to exploit this remotely.
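The intended, benign use looks like this; the child bash picks the function up from the environment without re-sourcing anything (a minimal sketch, the function name is made up):

```shell
# Define a function, export it with export -f, and call it from a
# child bash: the child inherits the definition via the environment.
bash -c 'greet() { echo "hello from the parent shell"; }
         export -f greet
         bash -c greet'
```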


Yes, if you have full control over the environment you can wreak all sorts of havoc ($PATH, $LD_PRELOAD, ...). What made shellshock special is that you only needed to control the value of a variable, not its name.

I don't see how this qualifies as much of a vulnerability. Maybe now that bash's imported-function feature is better known we'll see it leveraged as part of a multi-step attack though.
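For comparison, here is the kind of name-controlled attack that has always existed: with control of PATH, shadowing a binary is trivial (a throwaway sketch using a temp directory):

```shell
# PATH hijack sketch: drop a fake "ls" in a directory that comes
# earlier in PATH, and any unqualified "ls" invocation runs it
# instead of the real binary.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho hijacked\n' > "$tmp/ls"
chmod +x "$tmp/ls"
PATH="$tmp:$PATH" sh -c 'ls'
rm -rf "$tmp"
```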


> I just tried this in zsh 5.0.6

Do you mean that you run bash -c in zsh, or that you run zsh -c ?


Good call. I posted an edit to my original comment. zsh -c won't run the function outside of the current session. The "extra step" needed to add functions to the environment variables seems to make zsh much more secure than bash in the context of this discussion (perhaps?). This SE thread was really helpful to me in clarifying: https://unix.stackexchange.com/questions/33255/how-to-define...


I really don't think it does. zsh doesn't load functions that way at all. Did you copy the bash command when you were testing by any chance?


I posted an edit to my original comment. I meant that the equivalent defining of a function (in zsh) does work, but zsh -c won't run the function unless you permanently add the function to your env.


If this is how passing functions is supposed to work in bash, then why do I get different results, depending on which system I'm on? I've been madly patching my systems since yesterday and on a RHEL 6.5 system I get a different result than from an Ubuntu 12.04 system:

    [RHEL-6.5 tmp]$ uname -a
    Linux hostname1 2.6.32-431.23.3.el6.x86_64 #1 SMP Wed Jul 16 06:12:23 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
    [RHEL-6.5 tmp]$ bash --version
    GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
    [RHEL-6.5 tmp]$ ls
    file1  file2  file3
    [RHEL-6.5 tmp]$ env ls='() { echo vulnerable; }' bash -c ls
    file1  file2  file3


    Linux hostname2 3.8.0-44-generic #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
    Ubuntu 12.04:~/tmp$ bash --version
    GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
    Ubuntu 12.04:~/tmp$ ls
    file1  file2  file3
    Ubuntu 12.04:~/tmp$ env ls='() { echo vulnerable; }' bash -c ls
    vulnerable
Is this a basic difference in the feature sets of Bash 4.1 and Bash 4.2? Like several posters, I was unaware of this feature and have certainly never used it. Perhaps it should be off by default? How big of a mess would that make? I would also like to see more distributions use something other than bash as /bin/sh, but that's another discussion, I guess.


You need to use export -f now; the specifics of how functions are exported have changed in the Red Hat patch, and ls='() { ... is not sufficient anymore. Nothing has broken unless you had something that was not bash calling bash, relying on the internal-to-bash implementation detail that 'foo=() {' is how pre-patch versions of bash exported functions.
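You can check this directly. On a bash carrying the prefixed-name patch, the old-style string stays inert and `ls` still resolves to the real binary (on an unpatched bash, `type -t ls` would report "function" instead):

```shell
# Pre-patch bash imported any NAME='() { ... }' variable as a
# function. With the prefixed-name fix, the variable is just data,
# so ls remains the ordinary file on disk:
env ls='() { echo vulnerable; }' bash -c 'type -t ls'
```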


Thank you! I haven't had time to read the documentation for Red Hat's change.


As I mentioned in my other comment[1], I'm not 100% sure, but I think RedHat's patches for the 2nd vulnerability also cover this one.

[1] https://news.ycombinator.com/item?id=8373320


But which behavior is correct? On Ubuntu, ls is redefined, but isn't that what a shell function is supposed to do? If Red Hat "fixed" this, then haven't they disabled a feature? Sorry, not trying to be obtuse...


Good question, I don't know…but looks like mzs's comment answers us both. ;-)


    s/vulnerable/working/
Fixed that for you. :)


If I'm looking at this correctly, it appears that the Red Hat patches aren't vulnerable to this particular vulnerability. (I believe they patched bash differently than upstream, as per https://securityblog.redhat.com/2014/09/26/frequently-asked-...)

EDIT: I tested on CentOS 6.5 and Fedora 18 (for which we'd manually backported the Fedora 19 patches).


I've read the page you linked and it's not clear to me how to reach that conclusion based on that information. Help?


Sorry! My conclusion (i.e. that RedHat was patched against this 3rd vulnerability) was just by running

env ls='() { echo vulnerable; }' bash -c ls

on my recently patched CentOS box. If I'm reading that right, if I was vulnerable, I'd get the output 'vulnerable'? But, instead, I got the correct output of the `ls` command.

The linked article was pertinent because RedHat did patch bash in a different way than upstream. Here's the relevant quote:

> Our patch addresses the CVE-2014-7169 issue in a much better way than the upstream patch, we wanted to make sure the issue was properly dealt with.

That help?


Yes, it helps, thanks!

And the post below which shows the contents of the RH patches should help, if I knew how to read it. :)

But... it seems to me that if this doesn't work:

    env ls='() { echo vulnerable; }' bash -c ls
...then some scripts are now broken. Sorry, I don't know which ones. Needless to say, most bash scripts in existence don't take input from the wild wire. A sad day for those innocuous scripts.


mzs had some info about this in a separate comment[1] if you missed it. I've hit the edge of my knowledge about it. :-)


I don't use redhat, but my hunch is that the redhat patch is much more like Florian's (redhat security person):

http://seclists.org/oss-sec/2014/q3/693


Here is what has landed in Fedora, the fix is three patches:

http://pkgs.fedoraproject.org/cgit/bash.git/commit/?id=6319f...


If you're able to set arbitrary environment variables, then you could also emulate the prefix and suffix. e.g. this works even on patched F20:

    $ env 'BASH_FUNC_ls()=() { echo myls; }' bash -c ls
    myls
But as others mention, this is not really a bash issue anymore, rather a general problem of sanitizing environments.


Definitely works on Ubuntu and I'm assuming Debian as well.


I know this is not really an issue but is there any use for this "function exporting" feature in the first place? I had never heard about it before and when I did it kind of sounded like an ugly hack.


I was not able to replicate this on Amazon's linux distro after the second patch was applied. After only the first patch, though, the server was still vulnerable.



