
Bad Binder: Android In-the-Wild Exploit - el_duderino
https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html
======
Groxx
> _The Original Discovery of the Bug_

> _This bug was originally found and reported in November 2017 and patched in
> February 2018. Syzbot, a syzkaller system that continuously fuzzes the Linux
> kernel, originally reported the use-after-free bug to Linux kernel mailing
> lists and the syzkaller-bugs mailing list in November 2017. From this
> report, the bug was patched in the Linux 4.14, Android 3.18, Android 4.4,
> and Android 4.9 kernels in February 2018. However, this fix was never
> included in an Android monthly security bulletin and thus the bug was never
> patched in many already released devices, such as Pixel and Pixel 2._

Yea, that's a very large number of active devices for a bug that's believed
to be actively exploited. Roughly 75%, going by this:
[https://android.stackexchange.com/questions/51651/which-android-runs-which-linux-kernel](https://android.stackexchange.com/questions/51651/which-android-runs-which-linux-kernel)
plus
[https://developer.android.com/about/dashboards](https://developer.android.com/about/dashboards)

~~~
bscphil
Even my currently supported device (Moto X4) is vulnerable because it hasn't
received the October 6 patch. It's more than halfway through November, and I'm
still on the October 1st patch. The vendor patching system on Android is
pretty terrible and ends up leaving many devices vulnerable to publicly known
exploits most of the time.

~~~
sigmar
You don't need to worry about this one: the Moto X4 has had this patched
since the release of Pie.

You can check for yourself at the source code here:
[https://github.com/MotorolaMobilityLLC/kernel-msm/blob/MMI-PPW29.69-26/drivers/android/binder.c](https://github.com/MotorolaMobilityLLC/kernel-msm/blob/MMI-PPW29.69-26/drivers/android/binder.c)

Most devices already have the patch from the upstream kernel:
[https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/android/binder.c?h=linux-4.14.y&id=7a3cee43e935b9d526ad07f20bf005ba7e74d05b](https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/android/binder.c?h=linux-4.14.y&id=7a3cee43e935b9d526ad07f20bf005ba7e74d05b)

~~~
bscphil
That's good news, thanks.

------
gundmc
P0 has gotten a lot of unwarranted flak on HN recently for allegedly only
disclosing vulnerabilities in competitors' systems.

This is a perfect counterexample of a really nasty privilege escalation in
Google's own OS.

~~~
LMYahooTFY
I'm interested in arguments as to how said flak could qualify as valid
criticism.

Security research is inherently adversarial in nature, and it seems fitting to
have competing parties doing security research on one another's products.

Presumably, Android development involves some measure of security assessment?

If project zero never spent a day looking at Android, but all their
competitors did, I don't see the issue.

If there aren't any/enough competitors, that seems very unlikely to be a
security or security research related problem.

~~~
adrianmonk
> _Security research is inherently adversarial in nature_

Interesting. My perception is that it's often based on bragging rights, which
is more about ego than about an adversary. According to that theory, what
matters is how deeply you understand systems or how determined you are to go
the extra mile to find issues.

This extends to organizations which want to bolster their image by being at
the leading edge of research.

Anyway, having an adversary is part of the picture, but what you really care
about is not the victory over that adversary but your superiority on the
battlefield.

~~~
panpanna
Please read "the security mindset" by Bruce Schneier:

[https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html)

~~~
adrianmonk
Perhaps I was taking "adversarial" too literally, but to me it normally
suggests an antagonistic or hostile attitude toward the other side. For
example, if two next door neighbors don't get along and one of them reports
any little infraction to their homeowners' association, they are adversarial.
It's sort of the opposite of cooperative.

And this is not how I see the motivation and attitude of most security people.
For them it is mostly about the satisfaction of (or other inclination toward)
understanding how and where something might be vulnerable to exploit. It is a
particular type of thinking related to creativity, thinking outside the box,
and seeing things from a different perspective. (So basically what Schneier's
essay says. Which fits with my point.)

There is nothing sophisticated or clever about a neighbor calling the
homeowners' association. What they're interested in is the effect their
actions will have on their adversary. But a security researcher doesn't
usually care to actually exploit vulnerabilities. Or if they do, it is only to
prove that the vulnerability exists, not to gain from it.

So, getting back to the original point, I just don't follow the reasoning that
security researchers would prefer to avoid finding holes in their own
employer's systems. If they viewed everything as us vs. them, then yes, they
would want to take sides and protect their employer. Instead, I think that
because what they really care about is understanding vulnerabilities, they
would want to understand them wherever they see them, own employer's systems
included.

------
kdbg
Kind of relevant: a friend of mine posted his own writeup on exploiting this
bug earlier this month.

[https://dayzerosec.com/posts/analyzing-androids-cve-2019-2215-dev-binder-uaf/](https://dayzerosec.com/posts/analyzing-androids-cve-2019-2215-dev-binder-uaf/)

------
wyldfire
"NSO" is referred to in the article, but it's never expanded or explained.

Apparently this is the "NSO Group" [1], a private company in Israel that sells
"Pegasus" [2] (also referred to in the article).

[1]
[https://en.wikipedia.org/wiki/NSO_Group](https://en.wikipedia.org/wiki/NSO_Group)

[2]
[https://en.wikipedia.org/wiki/Pegasus_(spyware)](https://en.wikipedia.org/wiki/Pegasus_\(spyware\))

~~~
tptacek
It's not an article so much as a P0 blog post. The list of terms of art in
this post that ordinary readers wouldn't grok is long indeed.

NSO is extraordinarily well known in the security field; they're one of the
best known (but almost certainly not the largest or most effective) purveyors
of mobile exploits and malware (to governments). For P0 (and Apple), NSO is
"the adversary".

~~~
rafaelm
OK, I'm going to have to ask: who are the largest and most effective?

~~~
PeterisP
Cellebrite is a popular supplier for mobile forensics, which may require using
vulnerabilities that are not yet patched, i.e. finding or buying zero-days.

~~~
lawnchair_larry
Different market and different product. Cellebrite customers have the devices
in hand, and need to extract data for forensics. NSO customers compromise
phones remotely and silently in order to spy on what the owner is doing.

~~~
hunter2_
How do NSO and/or its customers steer clear of CFAA violations? Are they
given some kind of perpetual amnesty?

Edit: Oh, they ran up against this just a few weeks ago according to their
Wikipedia article. How about that.

~~~
tptacek
In addition to the fact that they're not a US company, selling exploits
doesn't actually violate CFAA.

Google could potentially sue them under civil CFAA if there was some
unauthorized access to Google infrastructure needed to develop the exploits,
but that's unlikely to be the case.

 _Using_ NSO tools against unwilling targets would violate US law, but that's
not what NSO does.

~~~
zigzaggy
According to recent articles I've read, this may not be the case. Apparently
at least one organization has accused NSO of doing the unlawful accessing
itself.

Standby while I look for a source.

~~~
tptacek
Yes, WhatsApp sued NSO under civil CFAA (among other things) for accessing
their infrastructure in the process of building and running their tools.

------
markstos
Interesting that Google's syzbot found and publicized the flaw, possibly
bringing it to the attention of bad actors. But at the time, the flaw was not
flagged as security-related and was thus not backported as a security patch.

If Google's syzbot had checked the kernel for the flaw before it was released
instead of after, that also seems like it would have prevented the issue from
going live in the first place.

Why does the Linux kernel project continue to release code with flaws like
this that can be found with automated tools?

~~~
outworlder
> Why does the Linux kernel project continue to release code with flaws like
> this that can be found with automated tools?

Because they have no other option: it's a fuzzer, so it may take a long time
(up to the heat death of the universe) for it to finally exercise the path
that causes the issue. And the kernel has a pretty enormous footprint.

Fuzzers never really terminate, so it's not like you can plug one into a
CI/CD system and wait for reports.

> The process of reproducing one crash may take from a few minutes up to an
> hour depending on whether the crash is easily reproducible or non-
> reproducible at all.

That's for one known crash. But otherwise it will be running 24/7 (across
multiple VMs!) looking for issues.

More details here:
[https://github.com/google/syzkaller](https://github.com/google/syzkaller)

~~~
fulafel
Depending on what was meant by "like this", they do have the option of fixing
this class of bugs (memory-safety vulnerabilities) by using mature,
tried-and-tested programming language practices and technology. It's a
decades-old debate between the C enthusiasts and the correctness/security
enthusiasts. (The same problems and options are of course available to Apple
and Microsoft; it's not just Linux.)

~~~
saagarjha
What technology would you have suggested be used here to catch this
vulnerability early?

~~~
Thorrez
fulafel is not advocating catching the vulnerability, fulafel is advocating a
more secure design. One possibility is Rust.

~~~
fulafel
Well, Rust is not that mature, though I agree that today it would be an
option worth considering. But people have been writing ring-0 code in safe
languages for decades. And of course there are more dimensions to robust
kernel design than choice of programming language.

------
ogre_codes
> Hunt for bugs based on rumors/leads that a 0-day is currently in use. We
> will use our bug hunting expertise to find and _patch the bug, rendering the
> exploit benign_.

This is probably the most fundamental issue with Android security. Google
patching the bug often doesn't render it benign: there are _still_ far too
many places in the Android update process where a bug can remain _malign as
hell_ on millions of devices, even after Android itself has been patched and
the user has kept their device on the latest available version.

------
badrabbit
Why can't Google/Blogspot ever work well on mobile!? Google hides sites that
are not mobile friendly from search results, but they don't fix this? I bet
it's their user-agent detection, where they serve different content based on
the mobile browser you use (i.e., if you don't use a popular browser, you get
crappy content, even if it uses WebKit like Chrome).

~~~
fulafel
Works well in mobile Firefox for me.

~~~
badrabbit
My mobile browser is based on Firefox and the site doesn't work for me. Which
is my point about UA detection.

------
TheCraiggers
> On October 3, 2019, we disclosed...

> We reported this bug under a 7-day disclosure deadline rather than the
> normal 90-day disclosure deadline. We made this decision based on credible
> evidence that an exploit for this vulnerability exists in the wild and that
> it's highly likely that the exploit was being actively used against users.

It's now nearly two _months_ since the discovery that this bug was being
exploited in the wild, and my still-in-support Pixel still doesn't have the
patch for it. What the hell, Google. How is this even close to alright for a
CVE like this? Totally unacceptable.

Give me my damn PinePhone already!

~~~
Jwarder
Are you sure your Pixel is vulnerable? It looks like the patch for the Pixel
1 and 2 was released on October 7th.

[https://source.android.com/security/bulletin/pixel/2019-10-01](https://source.android.com/security/bulletin/pixel/2019-10-01)

