
Glibc getaddrinfo stack-based buffer overflow - 0x0
https://googleonlinesecurity.blogspot.com/2016/02/cve-2015-7547-glibc-getaddrinfo-stack.html
======
jimrandomh
> "The glibc DNS client side resolver is vulnerable to a stack-based buffer
> overflow when the getaddrinfo() library function is used. Software using
> this function may be exploited with attacker-controlled domain names,
> attacker-controlled DNS servers, or through a man-in-the-middle attack."

> "The vectors to trigger this buffer overflow are very common and can include
> ssh, sudo, and curl. We are confident that the exploitation vectors are
> diverse and widespread; we have not attempted to enumerate these vectors
> further."

> "Remote code execution is possible, but not straightforward. It requires
> bypassing the security mitigations present on the system, such as ASLR."
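For anyone who hasn't touched this API: getaddrinfo() is the standard name-resolution entry point in virtually every networked C program, which is why the list of exploitation vectors is so broad. A minimal sketch of a typical call (using a numeric address with AI_NUMERICHOST here, so no actual DNS query is issued; a real attack path goes through a hostname lookup):

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST; /* numeric input: no DNS query is sent */

    int err = getaddrinfo("127.0.0.1", "80", &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    char buf[INET_ADDRSTRLEN];
    struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
    inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
    printf("resolved to %s\n", buf);
    freeaddrinfo(res);
    return 0;
}
```

Any code that passes an attacker-influenced hostname (instead of a numeric address) into this same call goes through the vulnerable resolver path.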

It is time for the bimonthly Internet security meltdown. Again. When they say
that "exploitation vectors are diverse and widespread", they really mean it.
Patch ASAP. This is a race; it is only a matter of time before criminals start
automatically and systematically scanning every server on the internet for
this, and you really really want to be patched before that happens.

Thinking a bit more long term, it's pretty clear at this point that we need to
expunge all C language networking code from the world, replacing it with Rust
or pretty much anything else. That's not sufficient by itself, but it is
necessary, or else the periodic Internet security meltdowns won't ever stop.

~~~
djsumdog
All this "Let's replace everything with Rust" is extremely childish.

Be wary of making programming languages a religion. The Java zealots did it,
and you can see the results where full Java projects are used in places where
a 20 line Python script would have been easier.

If you're serious, one of your pet projects should be trying to implement BSD
sockets with Rust. Then try compiling some C apps against your implementation.
Then do some performance tests showing memory and time benchmarks of your Rust
vs libc implementation. Do a blog post on it. Then we'll take you seriously.

~~~
AsyncAwait
The parent actually said "Rust or pretty much anything else", which means any
memory-safe language. Rust probably came to mind first because it's been
talked about a lot on HN lately, and memory safety is its explicit focus.

I see no "making programming languages a religion" zealotry in the parent's
post, but I see a lot of bitterness in yours. Perhaps it's time to reflect.

~~~
reality_czech
There are tons of vulnerabilities discovered every month. Most of them are in
Java, PHP, JavaScript, etc. software, because that's what the majority of new
code is written in. And those languages have features that can lead to
security holes which C doesn't have: PHP supports eval(), JS has cross-site
scripting attacks, Java has classloaders, and so on. But somehow only the
vulnerabilities in C software get programming-language zealots excited. So
maybe you should reflect.

~~~
dh997
Oracle JVM/OpenJDK and .net CLR are deployed in production at basically every
profitable company on Earth.

PHP... is deployed because it's perceived to be easier and friendlier than the
alternatives, and so wins the popularity war (as MySQL did). Facebook's HHVM
is another approach. Folks know the likely warts and mitigate them, shrinking
the attack surface by defending in depth from the front layers down to backend
services.

Postgres was harder to use in the 2000s despite having a clean codebase, but
got much easier to use, in large part due to MySQL, while leading on features
inspired by Oracle DBMS and more recently by NoSQL stores with hstore.

Attacking popularity for what gets the job done is moot, because defense is
never-ending vigilance for anything real.

Perhaps the focus should be on formally verifying core libs like zlib,
OpenSSL, OpenSSH (portable), glibc, etc. for correctness, for resilience
against side effects, and for keeping their ABI promises.

------
ptrincr
Redhat (RHEL5 unaffected) -
[https://access.redhat.com/security/cve/cve-2015-7547](https://access.redhat.com/security/cve/cve-2015-7547)
\-
[https://access.redhat.com/articles/2161461](https://access.redhat.com/articles/2161461)

RHEL6 -
[https://rhn.redhat.com/errata/RHSA-2016-0175.html](https://rhn.redhat.com/errata/RHSA-2016-0175.html)
\- update to glibc-2.12-1.166.el6_7.7.x86_64.rpm

RHEL7 -
[https://rhn.redhat.com/errata/RHSA-2016-0176.html](https://rhn.redhat.com/errata/RHSA-2016-0176.html)
\- update to glibc-2.17-106.el7_2.4.x86_64.rpm

Debian - [https://security-tracker.debian.org/tracker/CVE-2015-7547](https://security-tracker.debian.org/tracker/CVE-2015-7547) Use "aptitude show libc6" \- needs to be 2.19-18+deb8u3 (jessie), 2.21-8 (sid)

Ubuntu - [http://people.canonical.com/~ubuntu-security/cve/2015/CVE-20...](http://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-7547.html)

SUSE -
[https://www.suse.com/security/cve/CVE-2015-7547.html](https://www.suse.com/security/cve/CVE-2015-7547.html)

Interesting to note this tip:

 _While it is only necessary to ensure that all processes are not using the
old glibc anymore, it is recommended to reboot the machines after applying the
security upgrade._

From - [https://lists.debian.org/debian-security-announce/2016/msg00...](https://lists.debian.org/debian-security-announce/2016/msg00051.html)

Therefore, at the very least, you will need to restart anything which depends
on glibc. This should give you a list of affected processes:

      lsof | grep libc | awk '{print $1}' | sort | uniq

~~~
bits
Ubuntu - [http://people.canonical.com/~ubuntu-security/cve/2015/CVE-20...](http://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-7547.html)

~~~
collinmanderson
Ubuntu Trusty 14.04 user here. Am I correct that there's no patch out yet?

~~~
collinmanderson
Update: It looks like the patch is finally out.

~~~
atom_enger
What are you doing to pull this patch? I've apt-get updated and still am
seeing Version: 2.19-0ubuntu6.6

~~~
ptrincr
Does appear to be out -
[https://launchpad.net/ubuntu/+source/eglibc/2.19-0ubuntu6.7](https://launchpad.net/ubuntu/+source/eglibc/2.19-0ubuntu6.7)

apt-get update isn't seeing it at the moment though; I guess it's the mirror
I'm using.

Edit - working now. I can see:

       Version: 2.19-0ubuntu6.7

~~~
atom_enger
Yep I'm seeing it too. Thanks! Ubuntu just released this:

[http://www.ubuntu.com/usn/usn-2900-1/](http://www.ubuntu.com/usn/usn-2900-1/)

------
verytrivial
An extra gold star for the valuable comments added by the patch!
[https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html](https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html)

------
Munksgaard
Can we agree that it's urgently necessary to rewrite most of the core
Linux/OSS stack in memory safe languages? Exploits like this come up all the
time, and we know how to completely eliminate them. I don't care if it's Rust
or D or Go or Haskell or OCaml or anything else, as long as it's not C. The
sooner we do this, the better.

~~~
sargun
I'm still unsure why people start projects in C or C++ (or other unsafe
languages) when they don't need to. I totally understand that there are use
cases where C and C++ make sense.

In my mind, these use cases are quite limited though - integration into 3rd
party libraries, legacy codebases, and working in embedded environments. I
also agree that performance can be a reason, but see Knuth's opinion on
premature optimization.

I also think that when writing C and C++ it makes a lot of sense to restrict
oneself as much as possible, a la the kinds of standards in the Google Style
Guide or NASA's style guide.

Can people tell me why they still start projects in C++ and C? I may be
completely missing some piece of the puzzle.

~~~
autoreleasepool
> Can people tell me why they still start projects in C++ and C? I may be
> completely missing some piece of the puzzle.

Sure.

\---

 _Compelling features:_

\- Unrivaled portability (yes, contrary to folklore C - and to a lesser extent
C++ - is more portable than Java)

\- Excellent libraries

\- Excellent documentation and learning resources

\- Unrivaled tooling

\- perfect mapping to how the computer works

\- Unrivaled performance (other than Rust)

\- exceptional, engaging, and intelligent community (specifically referring to
the cpp community)

\---

 _Why start a C++ project today?_

\- you want to write a web browser

\- you want a platform-agnostic GUI

\- you want to learn OpenGL or Vulkan

\- you want to use Unreal and you're not a visual learner

\- you want to write the business logic for a cross-platform application and
share as much code as possible

\- you're implementing your own programming language

\- you're implementing an algorithm or data structure you've been studying

\- you're bored

\- you like programming in C++

\- you want to learn template meta-programming

\- you want to learn functional programming

\- you want to learn how to write a device driver

\- you need the performance

\- you are not writing mission critical software (aka 99% of all side
projects)

\- you're a curious person

\- you're a contrarian and everyone you know uses Java

\- you want to become a more well-rounded engineer

\- you want to learn about the ins and outs of language design and features

\- you want to make a lot of money

\- you only know C++

Those are a few reasons. Hope that helps you solve your puzzle.

PS: Do the anti-C people realize that a kernel in Rust/Nim/whatever would be
under a big unsafe block anyways? Yeah, you have to program "unsafe" code
eventually. Those nice high level abstractions are not omnipresent, you know.
They need something to abstract.

~~~
steveklabnik

> a kernel in Rust/Nim/whatever would be under a big unsafe block anyways?

Only portions of it would. The whole point of unsafe is that you can
encapsulate it, and then go from there.

~~~
autoreleasepool
Definitely. The ability to encapsulate code that is less strictly checked by
the compiler is what makes Rust such a compelling alternative to C. I
definitely think Rust is the ideal choice for a hobby kernel from scratch in
2016.

My point is that unsafety is unavoidable. You can encapsulate unsafe Rust or
wrap plain old C, it's, for all practical purposes, the same. At that level
language features matter a lot less. I agree we should minimize and
encapsulate loosely checked code as best we can.

~~~
steveklabnik
Yeah, absolutely. Ultimately, the machine is not safe. Such is life.

~~~
bcook
Exactly... and users are even worse.

------
cnvogel
As far as I see the bug primarily lies within this function here...
resolv/res_send.c
[https://github.com/bminor/glibc/blob/master/resolv/res_send....](https://github.com/bminor/glibc/blob/master/resolv/res_send.c#L952)

Lines 952 ... 1389 [~450 lines of code], with more than a dozen variables
holding random state. Think about the complexity you have with all the
conditionals and loops, often copying and pasting similar conditions with (xx1
&& xx2) variants.

While discussions about the relative merits of Rust, C, OCaml, Intercal are
fun, with enough dedication you can write unauditable/unreviewable code in any
language. Even if you avoid memory corruption, you still can't prove that such
code does anything correctly.

~~~
cmrx64
You sure can prove that any correct code does something correct, but you can
_never_ prove something about a program outside of a machine-checked formal
proof.

~~~
nly
The problem is that it's far more difficult to prove what code _doesn't do_
than what it does. I can write a function to do X, Y and Z, and I can show,
through observation and aggressive testing, that it does indeed do X, Y and
Z... but it's a _lot_ harder to prove that it _only_ does X, Y and Z and
nothing else. Memory safety is just one expression of this problem.

~~~
cmrx64
In the typical framework (refinement), if you prove your function does X, Y,
and Z, then it does exactly X, Y, and Z. Please don't take my comment out of
the context of formal, machine-checked proofs. It is hard, and that's my
point.

------
alkonaut
Criticism of C aside, when you use a language that isn't very expressive and
where it's easy to shoot yourself in the foot, you need to keep it very neat.

I mean just look at this
[https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/pos...](https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/posix/getaddrinfo.c)

The size of those functions, the huge macros, the low ratio of comments to
(non-obvious) statements, etc.

I know it's easy to criticize very old and tested code, but there is no (and
never was any) excuse for code like that.

~~~
AndyKelley
Compare to the musl implementation: [http://git.musl-libc.org/cgit/musl/tree/src/network/getaddri...](http://git.musl-libc.org/cgit/musl/tree/src/network/getaddrinfo.c)

~~~
alkonaut
It's better. The musl version is slightly smaller and tidier, but it also
doesn't handle all the same protocols, so it's not a fair comparison. At least
it doesn't have a lot of conditional compilation and macros.

But the musl code too has bloody ZERO comments for a nearly 100-line-long
function of dense/opaque C code! Why? Is commenting and splitting code into
functions frowned upon in "traditional" systems programming?

Who is it that writes code like this? Why isn't code like this rejected by a
maintainer with "split this into at least 5 smaller functions and comment
anything non-obvious"?

Edit: looking again, it's actually not _that_ bad; it does have two lookups
already broken out, and a lot of the verbosity is just field assignment. I
would be happy with a dozen lines of comments and a split into 2-3 functions.

~~~
AndyKelley
I feel like comments are extremely valuable in API documentation, but not so
much in the implementation.

When you're reading an API you want to understand how it works without
cracking open the source. When you're looking at an implementation it is your
duty to load the source into your comprehension, at which point comments
become irrelevant, or worse, misleading.

~~~
alkonaut
Here is how many comments are actually needed:
[https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=commitdi...](https://sourceware.org/git/gitweb.cgi?p=glibc.git;a=commitdiff;h=b995d95a5943785be3ab862b2d3276f3b4a22481)

------
IgorPartola
This is why I enabled automatic security updates on all the machines I
control. I'd rather get a monitoring alert that something is broken, than to
find out much later that someone rooted my server.

~~~
drummer32
Unless you automatically restart every service that is linked with a library
when you update it, you are still vulnerable.

------
mwcampbell
The Debian glibc package update that fixes this vulnerability is dated
February 11. But the patch wasn't posted on the glibc mailing list until
today. So did Debian get the patch even before it was made public upstream? If
so, then why didn't Ubuntu get it early as well?

~~~
stevekemp
Sufficiently "bad" issues are disclosed to multiple Linux vendors at the same
time, via the vendor-sec mailing list. If you're a member of a security team
for a distribution, this is generally where you hear about issues.

From there, coordinated release dates will be agreed, and barring big
surprises like one distro pushing out an update too soon, everybody _should_
release at the same time.

Sometimes people get sick, take a holiday, or drop the ball for other reasons.
I'd expect Ubuntu did get notified, but hadn't yet tested/built their fixed
packages to push out.

------
spyrosk
So can someone ELI5 how bad this is?

From what I'm reading, this should only affect systems that use a compromised
DNS server, or that are subject to a MitM attack. Which is serious, but not so
easily exploitable (I think).

~~~
peterwwillis
If you request a network connection to an attacker-controlled host, your
network software may try to resolve the attacker's host name. The DNS NS
record of their domain can direct your resolver to look up the record using
the attacker's own name server, meaning you end up doing DNS queries directly
against the attacker's NS.

So in theory, all you need to be exploited is to connect to a compromised host
and resolve its hostname.

~~~
thrownaway2424
It could be even worse than that. If the attacker tries to connect to you,
your server may try to reverse their IP for logging, and the attacker can
control the PTR record. Or the attacker could send you an email that's
guaranteed to bounce, and they control the return path that your mailer has to
resolve.

------
tasqa
Looking for a quick mitigation technique before patches start rolling out...
Would it be wise to limit responses to 512 bytes so the payload cannot be
loaded?

Configuring BIND to use a specific buffer size (only for BIND 9.3.2 and
newer): add the following line to the "options" section of your named.conf
file:

      edns-udp-size n;

Configuring Unbound to use a specific buffer size: add the following line to
the "server" section of your unbound.conf file:

      edns-buffer-size: n

source: [https://labs.ripe.net/Members/anandb/content-testing-your-re...](https://labs.ripe.net/Members/anandb/content-testing-your-resolver-dns-reply-size-issues)

~~~
sveiss
I haven't tested it, but I'd expect not. Given a limited EDNS0 buffer size,
UDP responses would either come back with a truncated flag or not at all.
Either case would trigger a TCP retry, and the bug can be triggered that way.

------
friendcomputer
If this was originally filed on an open bug tracker in July 2015, what were
the glibc team doing in the mean time? The Google post indicates they were
"working on it" when Google got in touch. How much work was going on, exactly?
How did this languish for so long?

------
arielb1
That's what you get when you have thousands of lines of code with variable
names like `thisanssizp`. glibc should die in a fire.

~~~
the_why_of_y
The DNS code in resolv/ is not formatted according to GNU coding style - you
can tell by the sane placement of braces.

It's a fork of an ancient version of BIND.

    
    
      Starting with version 2.2, the resolver in the GNU C Library comes
      from BIND 8. 
    

[https://sourceware.org/git/?p=glibc.git;a=blob;f=resolv/READ...](https://sourceware.org/git/?p=glibc.git;a=blob;f=resolv/README;h=dbb15108ae025e2896c0dc1d9161f14d6e05d330;hb=HEAD)

------
fpoling
If alloca could be used to get an arbitrary-sized buffer, the bug would not
exist. A related story,
[https://fosdem.org/2016/schedule/event/ada_memory/](https://fosdem.org/2016/schedule/event/ada_memory/),
points out that Ada does not have this limitation: there it is the job of the
compiler to allocate big chunks outside of the stack so as not to cause stack
overflows. C really needs such an API.

------
titzer
Buffer overrun on the stack...this makes me sad. It's 2016.

~~~
rcurry
"There are two things I am sure of after all these years: there is a growing
societal need for high assurance software, and market forces are never going
to provide it."

\- Earl Boebert

------
btrombley
Can someone please explain the fix in practice? Is it as simple as upgrading
glibc (and eglibc?) on all servers? Or is there a network change I should make
immediately?

~~~
mocko
Yes, patching your OS (e.g. an apt-get update + apt-get upgrade) then
rebooting once the vendor has released patches should be sufficient.

------
jvehent
iptables -t filter -A INPUT -p udp --sport 53 -m connbytes --connbytes 512
--connbytes-dir reply --connbytes-mode bytes -j DROP

iptables -t filter -A INPUT -p tcp --sport 53 -m connbytes --connbytes 512
--connbytes-dir reply --connbytes-mode bytes -j DROP

~~~
superkuh
Thanks for this. It'll do for my router until new firmware is out. I tried
the above, but --connbytes takes a range as input, not a single value, like
"512:8192", or "512:" for "greater than 512". In practice "512:" is
interpreted as "512:4294967295". So it ends up looking like:

iptables -t filter -A INPUT -p udp --sport 53 -m connbytes --connbytes 512:
--connbytes-dir reply --connbytes-mode bytes -j DROP

iptables -t filter -A INPUT -p tcp --sport 53 -m connbytes --connbytes 512:
--connbytes-dir reply --connbytes-mode bytes -j DROP

------
efuquen
Can someone give me technical reasons why this world isn't possible:

Parts of the Linux kernel, glibc, or any other critical C code get replaced by
Rust code a little at a time, which is also callable from C
([https://doc.rust-lang.org/book/ffi.html](https://doc.rust-lang.org/book/ffi.html)).
That way these libraries could be made safer in a controlled and incremental
manner.

And to reiterate I'm asking for _technical_ limitations, not political or
dogmatic.

~~~
DSMan195276
I thought about this before, since it seemed like a good idea to me too. It's
not that simple though, there's tons of problems you'd hit and the result
wouldn't be what you want. The bottom line is that Rust is simply not a drop-
in replacement for C:

You can't just pass a Vec<> to some C code and expect it to work. The reality
is that you're going to end-up writing a lot of C-like Rust - Which either
acts on C types directly or converts them to Rust types and then does things
to them and converts them back. Either way, the safety is largely lost due to
this because C types are not going to be safe.

You also can't use any inline C functions or preprocessor macros in your Rust
code, meaning that the interop between C and Rust isn't really that good - FFI
lets you call _functions_, but functions only make up part of the C API. The
rest would have to be duplicated in Rust and kept up to date, which is a huge
development burden and very error-prone.

And when you're finally "done" and all the C code is gone, what you're left
with is just a lot of C-like Rust, communicating with each-other through the C
API using C types with unsafe code everywhere. Essentially C but in Rust form.
You'd have to do a ton of refactoring to turn this into anything like
idiomatic Rust, and you can't do that refactoring till all the C code
interfacing with that Rust code is completely gone so you can stop supporting
interop with C.

The bottom line is that you _could_ do it, but it won't work well because
it's missing the big picture - if you don't write Rust code like Rust, then
you don't get the safety guarantees, and you can't do that if you're trying
to recreate a bunch of C interfaces in Rust. C interfaces are _not_ safe in
the way that Rust would want them to be safe. It's the same reason why you
can't write a converter from C to idiomatic Rust - you have to design the
system differently to get the gains from Rust, which is a non-trivial thing to
do. Rewriting small parts in Rust isn't going to produce the design
differences that you need, and the interop between Rust and C isn't very good,
leading to lots of problems while you're dealing with both languages in the
same code base.

~~~
steveklabnik
While this is true in some sense, there are also advantages. A Ruby gem
written in Rust via C FFI was one of the earliest production uses of Rust; the
Skylight people still thought (and think) that it was very, very worth it.

[https://www.youtube.com/watch?v=2BdJeSC4FFI](https://www.youtube.com/watch?v=2BdJeSC4FFI)
is a further exploration of this idea and how it works. Yes, you do need that
small layer, but future libraries will make it even easier. There's the Neon
work, for example: [http://calculist.org/blog/2015/12/23/neon-node-rust/](http://calculist.org/blog/2015/12/23/neon-node-rust/)

------
nwah1
Why don't more distros use the lighter weight C runtimes?

~~~
daeken
Many applications don't work with anything but glibc, so distributions don't
use them; because distributions don't use them, applications don't support
anything but glibc.

~~~
tomjakubowski
The number of applications that rely on malloc/free being fork-safe in
multithreaded programs, like glibc's allocator is, is astounding. Off the top
of my head, any application that uses glib and g_spawn_* is quite unportable
for this reason.
------
AndyMcConachie
I have a question that hopefully someone can clear up for me.

If I understand the Google sec article correctly, this requires a single
packet > 2048 bytes to be received by a host using glibc.

> The vulnerability relies on an oversized (2048+ bytes) UDP or TCP response,
> which is followed by another response that will overwrite the stack.

Is my understanding correct?

If it is, then it's worth pointing out that many links on the public Internet
have an MTU of 1500 bytes. This is a historical legacy of original Ethernet
from the 80's. Path MTU Discovery (PMTUD) doesn't really work on the Internet,
so it's safest to assume that you only ever get 1500 bytes.

Given all that, this places a burden on anyone wanting to exploit this. Since
they cannot assume a PMTU greater than 1500 bytes between endpoints, they're
limited in how they can exploit the bug. Correct?

Please correct me if I'm wrong. I always feel these bug reports are missing
that vital piece of information I need to operationalize the bug. And thanks.

~~~
sveiss
Datagrams larger than the MTU can be sent; they'll just be fragmented. The
point of PMTUD is to avoid fragmentation where possible, but fragmentation is
still possible.

~~~
AndyMcConachie
_Sometimes_ datagrams larger than 1500 bytes can be sent; sometimes they won't
reach their destination. Fragmentation doesn't really work, and at the very
least will almost always mean out-of-order delivery. Either way, you can't
depend on it.

PMTUD doesn't work at all on the modern Internet, not for IPv4 and not for
IPv6. So again, I'm just wondering how dangerous this exploit really is. I'm
not saying it isn't dangerous; I'm saying our understanding of it should be
tempered somewhat, given that many links have MTUs of 1500 bytes and don't
fragment properly.

------
t0mk
Is getaddrinfo usually statically linked or dynamically linked to stuff?

Which pkgs on Ubuntu will be necessary to upgrade once they roll the fix to
the repos?

~~~
zzzcpan
This is one of those things that usually cannot be linked statically at all.
So, not many things to upgrade.

~~~
umanwizard
Why can't it be linked statically?

~~~
fpoling
Glibc uses /etc/nsswitch.conf to decide at runtime which libraries have to be
loaded dynamically to implement the DNS resolver, user lookup, etc. In
principle one can compile those libraries in statically, but it is rather
complicated for no gain, so almost nobody does that.

~~~
umanwizard
Interesting, thanks!

------
pilif
How is this related to CVE-2015-0235 (the GHOST vulnerability last year)?

~~~
jimrandomh
They are separate vulnerabilities in the same codebase, both affecting DNS
clients. This one is new, so having previously patched to protect from
CVE-2015-0235 does not make you safe from CVE-2015-7547.

------
Gratsby
Has anyone put together a POC that doesn't require re-pointing the system
nameserver and crashing other applications?

~~~
arbruijn
You can create a small chroot with a custom resolv.conf.

Example (tested on Ubuntu 14.04 64-bit):

    
    
      mkdir -p root/lib/x86_64-linux-gnu root/lib64 root/etc
      cp -a --parents /lib/x86_64-linux-gnu/lib{c,nss,resolv}* root
      cp /lib64/ld-linux-x86-64.so.2 root/lib64
      cp CVE-2015-7547-client root
      echo 'nameserver 127.0.0.1' > root/etc/resolv.conf
      sudo chroot root ./CVE-2015-7547-client
    

If it prints nothing it crashed. Try running with strace to be sure.

~~~
Gratsby
Brilliant. Thank you.

------
fpoling
I wonder why alloca+malloc/free was used in the first place and not
straightforward malloc/realloc/free. The overhead of the latter should be
negligible given that this is a DNS resolver. The overhead could in fact be
negative, due to simpler code and better cache utilization.

Premature optimization is the root of all evil indeed.

------
JabavuAdams
I apologize for my bad behaviour on this thread. I'm in a bad place, mentally.

------
maxima120
I think the new generation wants to take over Linux with their new ideas, but
in reality they just want to prove themselves, and are prepared to completely
ruin it in the process. Well, they're just young and want new things. They
know big words and think they know the world, and they hate their parents...
so in short, they want to do to Linux what the previous generation did to
Windows... karma.

P.S. Whatever language you use is irrelevant. Bugs are in people's heads. Code
is just the reflection.

------
totony
If this causes you serious problems, you should really consider using a
patched grsecurity kernel (even though their stable versions aren't free
anymore).

------
LinuxBender
Has anyone actually tried the PoC on their systems? I will test on CentOS 6
and 7 after I have had my coffee. Anyone willing to volunteer to test on
Ubuntu and Debian?

Here is CentOS 7

    
    
        [  389.064412] do_general_protection: 159 callbacks suppressed
        [  389.064416] traps: CVE-2015-7547-c[1161] general protection ip:7fa6b0d8fd67 sp:7ffdaf034a30 error:0 in libresolv-2.17.so[7fa6b0d87000+16000]

~~~
lukeasrodgers
I tested my ubuntu 12.04 and it did not seem to be affected.

    
    
      server:
      vagrant@precise32:/tmp/CVE-2015-7547$ sudo python CVE-2015-7547-poc.py
      [UDP] Total Data len recv 36
      [UDP] Total Data len recv 36
    
      client:
      ./CVE-2015-7547-client
      CVE-2015-7547-client: getaddrinfo: Name or service not known
    
      vagrant@precise32:/tmp/CVE-2015-7547$ lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description:    Ubuntu 12.04 LTS
      Release:        12.04
      Codename:       precise
    
      vagrant@precise32:/tmp/CVE-2015-7547$ ldd --version
      ldd (Ubuntu EGLIBC 2.15-0ubuntu10) 2.15
      Copyright (C) 2012 Free Software Foundation, Inc.
      This is free software; see the source for copying conditions.  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
      Written by Roland McGrath and Ulrich Drepper.

~~~
atom_enger
I'm seeing the same behavior here. Thanks for the update and confirmation. Can
others confirm whether this means 12.04 isn't impacted by this CVE?

------
mariuolo
There's something I don't understand: if it's from 2015, how come it hadn't
been fixed until now? At least for Debian.

------
Mojah
A summary of the problem, the affected Linux versions, and patching
remediations has been posted here: [https://ma.ttias.be/critical-glibc-buffer-overflow-vulnerabi...](https://ma.ttias.be/critical-glibc-buffer-overflow-vulnerability-getaddrinfo-linux-cve-2015-7547-cve-2015-5229/)

------
ComputerGuru
Everyone's going crazy advising everyone else to update, but the glibc
homepage is happily, statically sitting on version 2.22 from 2015-08-14.

Maybe we should start by releasing an update there asap, and go from there?

(I just switched our last Linux server over to FreeBSD, despite some software
we use not being available; so I'm happy to sit this one out.)

------
leesalminen
I haven't seen anything come down through yum or apt-get yet. Does anyone know
how this can be patched prior to that?

~~~
Trellmor
Debian has published security announcements for glibc [1] and eglibc [2].
Updates should be available from
[http://security.debian.org/](http://security.debian.org/) (and I was able to
install them).

Also, keep this in mind:

> While it is only necessary to ensure that all processes are not using the
> old glibc/eglibc anymore, it is recommended to reboot the machines after
> applying the security upgrade.

[1] [https://lists.debian.org/debian-security-announce/2016/msg00...](https://lists.debian.org/debian-security-announce/2016/msg00051.html)

[2] [https://lists.debian.org/debian-security-announce/2016/msg00...](https://lists.debian.org/debian-security-announce/2016/msg00050.html)

------
amelius
Is there something that can be done about this on the network level? I mean,
it seems almost impossible to assure that every instance of getaddrinfo is
patched.

I'm thinking about a background tool (iptables plugin?) that simply truncates
long DNS replies, so that they can never cause a buffer overflow.

~~~
JdeBP
See
[https://news.ycombinator.com/item?id=11112923](https://news.ycombinator.com/item?id=11112923)
on this page.

~~~
amelius
Thanks!

------
jedisct1
Don't panic, don't spread fear:
[https://00f.net/2016/02/17/cve-2015-7547/](https://00f.net/2016/02/17/cve-2015-7547/)

------
rms_returns
Can someone explain in layman's terms how this affects me as a Linux user who
works on Ubuntu?

~~~
djcapelis
Update your machines and you should probably restart too so any running
processes link to the new libc.

~~~
inopinatus
This. Far too often I see libraries updated by a security patch but without a
restart of affected dependent processes, meaning you're not protected.

    
    
      sudo lsof -n | grep DEL
    

is your friend in this matter.

------
Erwin
It seems tcp_wrappers-libs is using getaddrinfo, so if you have some rules set
up there, that may be an attack vector. I'm not sure whether sshd will do a
getaddrinfo if you don't have a tcp wrappers rule set up in
/etc/hosts.{deny,allow}.

~~~
JdeBP
See
[https://news.ycombinator.com/item?id=8957680](https://news.ycombinator.com/item?id=8957680)
for earlier discussion of that latter point.

------
newman314
Ubuntu is now patched.

[http://people.canonical.com/~ubuntu-
security/cve/2015/CVE-20...](http://people.canonical.com/~ubuntu-
security/cve/2015/CVE-2015-7547.html)

------
Ono-Sendai
Looking at that code, which is a tangle of goto statements and buffer
allocations and accesses, it's a miracle any of it works in the first place. I
bet there are tons more bugs in there.

~~~
matthewaveryusa
Tangle of gotos? Those gotos all look reasonable -- they jump to return
conditions no more than a few lines away. Buffer allocations? It allocates 2-3
buffers. That's all very reasonable. Allocations and gotos aren't the devil in
C.

~~~
Ono-Sendai
Well, the control flow seems complicated at least.

~~~
0x4a42
No

~~~
carterehsmith
.

------
cft
If I point the local DNS resolver to Google's DNS server as a temp fix
(8.8.8.8) in /etc/resolv.conf , will that mitigate the threat before the
patch?

~~~
JdeBP
The problem is DNS/UDP and DNS/TCP responses that are larger than 2048 bytes
hitting your BIND DNS client library. Do you have any reason to believe that
Google Public DNS (in your part of the world) always limits its responses to
smaller than 2048 bytes?

~~~
cft
Google's DNS relay may take steps to sanitize malicious DNS responses.

------
mmosta
Newbie questions:

Are "upload file by URL" functions potential vectors? (payload in a malicious
DNS response)

Is this contingent on a cooperating DNS server (not truncating the record)?

------
takeda
Fun fact: If you have programs written in Go, after patching this you probably
will need to recompile all of them.

~~~
AsyncAwait
Go 1.5 uses its own pure-Go DNS resolver.

~~~
acln
Under certain conditions, it does not.

[https://golang.org/pkg/net/#hdr-
Name_Resolution](https://golang.org/pkg/net/#hdr-Name_Resolution)

~~~
justincormack
When it does not, it dynamically links.

~~~
takeda
That does not make sense.

Among others the page lists:

> [...] when /etc/resolv.conf or /etc/nsswitch.conf specify the use of
> features that the Go resolver does not implement, and when the name being
> looked up ends in .local or is an mDNS name.

Those things you can't check at compile time.

In any case, you can tell whether the binary is statically linked or not.

Statically linked:

    
    
        $ file /usr/bin/consul 
        /usr/bin/consul: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
        $ ldd /usr/bin/consul
                not a dynamic executable
    

Dynamically linked:

    
    
        $ file /bin/echo
        /bin/echo: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped
        $ ldd /bin/echo
                linux-vdso.so.1 =>  (0x00007fff01fff000)
                libc.so.6 => /lib64/libc.so.6 (0x0000003bcd800000)
                /lib64/ld-linux-x86-64.so.2 (0x0000003bcd000000)
    

Notice the libc.so.6 dependency in the dynamically linked case.

------
weinzierl
Can this be used to exploit DNS servers via other rogue DNS servers? What if I
set up a rogue DNS server and wait for it to be queried by 8.8.8.8? Once I own
8.8.8.8, I can exploit every client that queries it. Browsers use
getaddrinfo(), don't they?

~~~
inopinatus
No. Recursive resolvers do not use getaddrinfo(3) to resolve DNS queries.

~~~
weinzierl
That was what I was thinking, but it was a stupid idea.

My point is that if you could use this bug to use one rogue DNS to own other
DNSes this could be leveraged to bring large parts of the Internet down.

I don't know how hard or easy this could be. A quick glance into PDNS reveals
four calls to getaddrinfo()[1]. So it might be possible...

[1]
[https://github.com/PowerDNS/pdns/search?utf8=%E2%9C%93&q=get...](https://github.com/PowerDNS/pdns/search?utf8=%E2%9C%93&q=getaddrinfo)

~~~
JdeBP
Out of those four calls:

One uses AI_NUMERICHOST and thus doesn't do any DNS client activity. A second
isn't supplied arbitrary domain names as input; the names it receives are the
ones configured by the server administrator for the location of a specific
back-end data source. The third is used only inside a "zone transfer"
subsystem and likewise isn't supplied arbitrary domain names; those names
again come from local data specified by the server administrator.

And the fourth is not even part of the executable program, the code being
disabled with #if 0 .

The "S" in "DNS" stands for "system", not "server", by the way. A rogue DNS
_server_ would "own" other DNS _servers_.
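
JdeBP's AI_NUMERICHOST point is worth unpacking: that flag tells
getaddrinfo() to accept only numeric addresses and to perform no DNS lookup at
all, so no attacker-controlled response is ever parsed. A rough analogue in
Rust (a sketch, not PowerDNS code; `numeric_only` is an invented name):

```rust
use std::net::IpAddr;

// Like getaddrinfo() with AI_NUMERICHOST: accept only numeric addresses,
// never fall back to a DNS lookup. Parsing happens entirely in-process.
fn numeric_only(host: &str) -> Option<IpAddr> {
    host.parse::<IpAddr>().ok()
}

fn main() {
    assert!(numeric_only("192.0.2.1").is_some()); // IPv4 literal
    assert!(numeric_only("::1").is_some());       // IPv6 literal
    // A hostname is rejected rather than resolved, so no DNS traffic
    // (and no exposure to a malicious response) is possible.
    assert!(numeric_only("example.com").is_none());
}
```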

------
alblue
Does this affect OSX?

~~~
sigjuice
No. OS X has its own implementation.

[http://www.opensource.apple.com/source/Libinfo/Libinfo-477.2...](http://www.opensource.apple.com/source/Libinfo/Libinfo-477.20.1/lookup.subproj/libinfo.c)

------
el8c0d3r
Don't use SQL or PHP either! These are vulnerable to bugs!

------
rtpg
Are there any big efforts to rewrite glibc in something like rust? ... Is that
a thing that is even possible? An in-place replacement library for dynamic or
static linking.

I'm really worried that I still hear about buffer overflows in this day and
age. Of all the libraries in the world, glibc should probably be written in
some subset of Idris that compiles into 100% safe C. We have the technology to
move to this now.

~~~
autoreleasepool
I'm not trying to be rude, but I'm sorry...

Are you seriously suggesting that rewriting an implementation of the C
standard library in a language that isn't C is something that makes any bit of
sense?

I think you have a fundamental misunderstanding of the role the C standard
library plays in the C language. Not to mention UNIX in general.

Maybe you're suggesting we need an OS based in Rust instead of C (which is
more reasonable), but in that case why wouldn't you use Rust's standard
library as the backbone of that system?

Let me put it this way, if the GNU/Linux C standard library gets rewritten in
Rust or Idris (sigh... seriously?) I will switch to Windows 10 with default
settings for the rest of my life. I would rather have the NSA log my every key
stroke than participate in a community where that is considered a good idea.

~~~
arielb1
Rust can have the same ABI as C - rewriting NSS in Rust should not pose ABI
problems.
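
To make the ABI point concrete, here is a minimal sketch (the function and its
name are invented for illustration) of a Rust function exported with the C
calling convention; a C caller would declare it as
`int32_t saturating_add_i32(int32_t, int32_t);` and link against the compiled
Rust object:

```rust
// `extern "C"` gives the function the platform C ABI, and `#[no_mangle]`
// keeps the symbol name unmangled so C code can find it at link time.
#[no_mangle]
pub extern "C" fn saturating_add_i32(a: i32, b: i32) -> i32 {
    a.saturating_add(b)
}

fn main() {
    // Callable from Rust as well; the ABI annotations only affect linkage.
    assert_eq!(saturating_add_i32(2, 3), 5);
    assert_eq!(saturating_add_i32(i32::MAX, 1), i32::MAX);
}
```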

~~~
autoreleasepool
It's still completely pointless. How would mmap be any different if it was
implemented in Rust rather than in C? You need unsafe to implement it! The
safety mechanisms are practically useless at such a low level.

Wrap existing C with safe Rust, not the other way around! We don't rewrite
unsafe C code in Unsafe Rust so that it can be called from all the unsafe C
code in UNIX. Such a translation is so pointless and error prone.

I can't even believe I'm the one being called backwards here.

~~~
dbaupp
_> How would mmap be any different if it was implemented in Rust rather than
in C? You need unsafe to implement it! The safety mechanisms are practically
useless at such a low level._

They're definitely not. The power of Rust is not avoiding all `unsafe` ever,
but wrapping that unsafe into finite-scope, _safe_ wrappers. Something like
mmap would presumably interface with the OS's internal memory management
routines, which would definitely have to have `unsafe` somewhere since they're
hitting the hardware, but it seems very reasonable for the amount of unsafe
exposed to be much reduced (possibly even to zero). See, for instance,
[http://os.phil-opp.com/modifying-page-tables.html](http://os.phil-
opp.com/modifying-page-tables.html) . Basically, as soon as you're above the
absolute raw hardware level, you can start introducing extra help to reduce
unsafety/catch bugs.

In any case, code inside `unsafe` still benefits from all the conventional
Rust checks, e.g. iterating over a slice won't go out of bounds due to a typo
in the loop even inside `unsafe`, nor will references accidentally become
dangling. (Of course, the `unsafe` block may do something explicitly that
causes either of these problems, but this exact same risk is pervasive in
_all_ C code, rather than just around explicitly marked areas.)
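
A tiny sketch of that "finite-scope unsafe behind a safe wrapper" pattern
(`first_byte` is an invented example, not glibc code):

```rust
// The bounds check before `get_unchecked` is what makes this wrapper
// sound: no caller can trigger the out-of-bounds read, so the `unsafe`
// never leaks past this function's boundary.
fn first_byte(buf: &[u8]) -> Option<u8> {
    if buf.is_empty() {
        None
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *buf.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"abc"), Some(b'a'));
    assert_eq!(first_byte(b""), None);
}
```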

 _> Wrap existing C with safe Rust, not the other way around! We don't rewrite
unsafe C code in Unsafe Rust so that it can be called from all the unsafe C
code in UNIX. Such a translation is so pointless and error prone._

Note that this is exactly what is done now, the features of Rust for making
safe interfaces makes Rust often a far nicer way to use C interfaces than C
itself, without overhead (IMO, of course). See [http://blog.rust-
lang.org/2015/04/24/Rust-Once-Run-Everywher...](http://blog.rust-
lang.org/2015/04/24/Rust-Once-Run-Everywhere.html#safe-abstractions) , for
example. Of course, zero-overhead Rust is only as safe as the C code it wraps,
it can't (in general) protect against the C code not correctly implementing
its stated contract(s).

~~~
tomjakubowski
It seems hard (impossible?) to write a safe, full-featured mmap wrapper in
Rust, because of the potential for multiple processes sharing mutable access
to the memory and defeating Rust's aliasing checks.

Maybe you could separately map shared RefCells to guard the pages returned by
the mmap wrapper? If the region is backed by a file, you'd need to check those
RefCells on every IO access too, though.

~~~
eddyb
What you can do for mmap is have several abstractions (or one using generics
and phantom types), one for each different set of usecases, with different
access modes.

Examples would be:

* read-only: &ROMemMap -> &[u8]

* read-write: &mut RWMemMap -> &mut [u8]

* write-only: fn set(self: &mut WOMemMap, i: usize, b: u8),

or more generally: &mut WOMemMap -> &mut [WriteOnly<u8>] where WriteOnly<T>
has fn set(&mut self, T)

Now for the shared case, consider this: aliasing rules can be avoided with
atomic operations, i.e. Arc<AtomicUsize> is shared and can be safely
(atomically) read/written by multiple threads.

In the multi-process case, you could provide an atomic API, although we don't
currently seem to expose byte-level atomics (likely not present on some
platforms) so if you wanted to write a demo you'd need to use the unstable
intrinsics atm.

FWIW, restricted to single-threaded code, this results in the Cell get/set
API, which is not hardware-atomic but cannot overlap with other accesses to
the same memory, as Cell doesn't implement Sync, so any threading abstraction
will block you from doing any kind of sharing of Cells.

That is, you can share &[Cell<u8>] pointing to any bag of bytes that sits in
read-write memory, and anything in the same thread can read or write to it,
safely.
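
The phantom-type idea above can be sketched in a few lines. This is a toy, not
real mmap: a Vec stands in for the mapped region, and `MemMap`, `ReadOnly`,
and `ReadWrite` are invented names, but the type-level trick of exposing
`&mut [u8]` only for the read-write mode is exactly what eddyb describes:

```rust
use std::marker::PhantomData;

// Zero-sized marker types standing in for the access modes.
struct ReadOnly;
struct ReadWrite;

// Stand-in for a mapped region; a real version would wrap the raw
// pointer returned by mmap instead of owning a Vec.
struct MemMap<Mode> {
    bytes: Vec<u8>,
    _mode: PhantomData<Mode>,
}

impl<Mode> MemMap<Mode> {
    fn new(bytes: Vec<u8>) -> Self {
        MemMap { bytes, _mode: PhantomData }
    }
}

impl MemMap<ReadOnly> {
    fn as_slice(&self) -> &[u8] { &self.bytes }
}

// Only the read-write mode has a method that hands out mutable bytes,
// so "writing through a read-only mapping" is a compile-time error.
impl MemMap<ReadWrite> {
    fn as_slice(&self) -> &[u8] { &self.bytes }
    fn as_mut_slice(&mut self) -> &mut [u8] { &mut self.bytes }
}

fn main() {
    let ro: MemMap<ReadOnly> = MemMap::new(vec![1, 2, 3]);
    assert_eq!(ro.as_slice(), &[1, 2, 3]);

    let mut rw: MemMap<ReadWrite> = MemMap::new(vec![0; 3]);
    rw.as_mut_slice()[0] = 42;
    assert_eq!(rw.as_slice()[0], 42);
    // ro.as_mut_slice() would not compile: no such method exists.
}
```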

