
Chrome OS exploit: WebAsm, Site Isolation, crosh, crash reporter, cryptohomed - chx
https://bugs.chromium.org/p/chromium/issues/detail?id=766253
======
kevingadd
It's kind of depressing to see that a considerable portion of this depends on
design decisions that probably seemed safe at the time but are wildly
dangerous in hindsight.

Getting access to the privileged 'crosh' extension in this exploit depends on
getting code into its process, which depends on the odd choice to put
unrelated extensions into a shared extension process when there are too many
processes. Given the insecure nature of the Chrome Web Store, this was
questionable to begin with - should a potentially backdoored bookmarking
extension really share a process with my password safe? - but once privileged
Chrome OS code started running in extension processes this immediately became
a huge security hole waiting to be exploited.

In this case they also don't even need a compromised extension, because they
can (if I understand this right) impersonate an extension's frame and then
send messages of their choice to the extension. The access control here is
almost entirely done inside the content process, where potentially compromised
content is running. It's hard to avoid this since most extension code runs
inside content processes by design, but it's a weakness that is rarely (if
ever) called out in the chrome extension documentation.

The webassembly exploit part of the chain bums me out (I was always afraid of
stuff like this when I was working on the design for it) but it's pretty
uninteresting, really. The simple sort of bug you get when you insist on
writing stuff in C++.

The parts of the chain after getting access to crosh also seem like tough-to-
avoid oversights. This set of attacks definitely makes symlinks look like a
problem child, given how useful they are for all sorts of naughty behavior the
attack gets up to :)

~~~
eugeneionesco
>The webassembly exploit part of the chain bums me out (I was always afraid of
stuff like this when I was working on the design for it) but it's pretty
uninteresting, really. The simple sort of bug you get when you insist on
writing stuff in C++.

I really hope people don't think webassembly is at fault for this; this
vulnerability is no different from any other memory corruption vulnerability
you would find in the js interpreter or the css parser or whatever.

~~~
standupstandup
Well, WebAssembly's primary near term contribution will be introducing the
world of C++ exploits to web apps, which are already groaning under the load
of XSS, XSRF, path traversal, SSRF etc attacks. Adding double frees, use after
frees and buffer overflows on top doesn't seem ideal.

As for the rest, well, it'd be nice if there was any sort of plan to make
Blink safer. I know about oilpan but what Mozilla is doing with Rust is
impressive. The JVM guys are working on rewriting the JVM in Java. What's
Blink's plan to make its own code safer? Sandboxing alone?

~~~
steveklabnik
What exploits are you specifically worried about with wasm?

The ones you call out in this post don’t have the same impact as native, even
when it’s C or C++ compiled to wasm.

~~~
standupstandup
http://foo.bar.com/url?q=<base64 encoded stuff>

wasm program parses q

stack smash occurs

ROP chain is used to gain code execution

user cookie is stolen

attacker now controls your account

I don't know enough about wasm to know if it has some special mitigations for
this but when I looked at it, wasm seemed closer to a CPU emulation than a
high level language VM. Flat memory space, no GC, no pointer maps.
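The first two steps above can be sketched in C (hypothetical handler names, not code from any real app); the missing bounds check is the whole bug:

```c
#include <string.h>

/* Hypothetical query handler for the chain above: copies the
 * base64-decoded value of ?q= into a fixed-size buffer with no
 * bounds check. On a native target the overflow can clobber the
 * saved return address (the "stack smash" step); compiled to wasm
 * it corrupts adjacent linear memory instead, since the wasm call
 * stack is not addressable. Returns the number of bytes copied. */
size_t handle_query(char *dst, size_t dst_cap, const char *decoded,
                    size_t decoded_len) {
    (void)dst_cap;                 /* BUG: capacity is ignored */
    memcpy(dst, decoded, decoded_len);
    return decoded_len;
}

/* The bounds-checked version that closes the hole: truncate the
 * copy to the destination's capacity and NUL-terminate. */
size_t handle_query_safe(char *dst, size_t dst_cap, const char *decoded,
                         size_t decoded_len) {
    if (dst_cap == 0)
        return 0;
    size_t n = decoded_len < dst_cap ? decoded_len : dst_cap - 1;
    memcpy(dst, decoded, n);
    dst[n] = '\0';
    return n;
}
```

Whether the later ROP steps work as on native is exactly what the replies below get into.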

~~~
kodablah
WASM memory is a set of memory specific to a module (and they only allow one
memory instance right now). It can be imported/exported to other modules, but
there is no sandbox escape (in theory). For the web backend, it's just backed
by a Uint8Array IIRC. It's all userland. If anything escapes the WASM
interpreter/compiler, it is the fault of the interpreter/compiler (as is the
case here) and not the fault of the WASM bytecode itself which has no escape
mechanism. Think of a WASM VM just like a JS VM. Even though it may appear low
level just because it can JIT better/cleaner, it operates in the same arena as
JIT'd JS (at least for the web target).
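To make the "it's just a flat byte array" point concrete, here is a toy model (an illustration, not how a real engine is implemented) of a wasm store into linear memory:

```c
#include <stdint.h>

/* Toy model of a wasm linear memory: one flat byte array per module,
 * backed in a JS host by a Uint8Array over the Memory's buffer. A
 * store is just a write into this array -- there is no way for it to
 * reach code, the host heap, or another module's memory. */
#define WASM_PAGE_SIZE 65536
static uint8_t linear_memory[1 * WASM_PAGE_SIZE];

/* A bounds-checked byte store, as wasm semantics require: an address
 * past the end of linear memory traps instead of escaping.
 * Returns 0 on success, -1 to model the trap. */
int wasm_store8(uint32_t addr, uint8_t value) {
    if (addr >= sizeof(linear_memory))
        return -1;          /* trap: address outside linear memory */
    linear_memory[addr] = value;
    return 0;
}
```

An out-of-bounds write from buggy C-compiled-to-wasm can therefore corrupt the module's own data, but the write itself never leaves the array.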

~~~
kevingadd
You don't need to escape a sandbox when the application has access to all the
user's data.

The attack surface of a gmail implemented in C++-compiled-to-wasm is almost
certainly going to be larger than a gmail implemented in JS, because the
runtime environment is vulnerable to double frees and heap corruption and
other attacks, even if it won't escape the browser sandbox. My gmail tab
basically has access to my entire life.

~~~
kodablah
I don't understand. In the gmail example, the attack surface to who, a
malicious email sender? As in something being handled by wasm in the browser
has a better chance at XSS than if it was handled with JS? Why would untrusted
content like that be handled by a client-side language anyways? Whether it is
wasm, JS, wasm-interpreted-by-a-JS-interpreter, JS-interpreted-by-
a-C++-interpreter, wasm-interpreted-by-a-C++-interpreter, or whatever, the risks
are similar. If you are talking about untrusted wasm or JS scripts accessing
things inside the same sandbox, that's a different vector and it's less about
the size of surface area and more about the introduction of the vector in the
first place.

~~~
kevingadd
Simple example (though not something I think the Gmail team would actually
ship): I want to load a .png file that's attached to an email. If I decide to
use a build of libpng I control (for example, to work around broken color
profile support in browsers), a bug in that libpng build could allow privilege
escalation within the tab to get access to my gmail contacts or send emails.
Bugs in loaders for image file formats are not unheard of, and people treat
image files as innocuous.

~~~
kodablah
Ah, that example makes it clear. I'll ignore the obvious issue of a libpng
written in JS having a similar problem. So the libpng WASM module would have its
memory instance (probably imported from the caller) and its functions would
reference that memory. It's not like regular RAM, where overflowing a
memory write could overwrite executable instructions. Code is separate from
memory. There is no eval. There is a "call_indirect" which can call a function
by a dynamic function index, but what would a dangerous function that libpng
imported even be? You can't execute memory or anything.

I can see some site DoSing though, where you use the equivalent of a PNG bomb
to blow up CPU in the parser, but that risk applies to any not-meticulously-
written client-side parser of untrusted input.

So while you can toy with memory and maybe even affect the function index
before a subsequent indirect call, it's nowhere near as dangerous to the
outside-of-wasm world as raw CPU instructions. I can see an issue where the
caller that imports libpng and exports its memory to it might have something
secret in that memory... hopefully multi-memory and GC-like structs and whatnot
can make passing and manipulating large blocks more like params than shared
memory (and all of shared memory's faults).

~~~
kevingadd
It's a bit extreme but the reality is that a lot of production libraries tend
to pull in imports that are as dangerous as eval, because the scope of the
library is enormous, or it's actively supposed to interact with the DOM or JS.
At that point, if someone can more trivially exploit it with a double free or
buffer overflow, you've increased your security risk relative to JS (because
overflowing a Uint8Array is basically never going to result in arbitrary
function invocation).

The way function addresses are sequential in wasm tables (and deterministic)
also means it is probably easier to get to the function you want once you get
code execution.
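A toy model of that hazard (hypothetical function names, not libpng's real imports): wasm function "pointers" are small, dense indices into a per-module table, so corrupting an index stored in linear memory redirects a call_indirect to a neighboring entry with no address guessing:

```c
#include <stdint.h>

/* Two functions the module can reach indirectly; their table indices
 * are sequential and deterministic. */
static int render_image(void) { return 0; }
static int send_email(void)   { return 1; }  /* the "dangerous import" */

typedef int (*func_t)(void);
static func_t table[] = { render_image, send_email };

struct module_state {
    char     buf[8];       /* attacker-influenced data            */
    uint32_t callback_idx; /* index consumed by call_indirect     */
};

/* Overflowing buf by a few bytes can flip callback_idx from 0 to 1.
 * A real engine still bounds-checks the index (and type-checks the
 * callee's signature), so an out-of-range index traps rather than
 * calling into the void -- modeled here by returning -1. */
int invoke_callback(struct module_state *s) {
    if (s->callback_idx >= sizeof(table) / sizeof(table[0]))
        return -1;         /* call_indirect trap: index out of range */
    return table[s->callback_idx]();
}
```

The trap limits the attacker to functions already in the table with a matching signature, but as argued above, a big library's table can contain plenty of useful targets.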

------
jacksmith21006
This is a textbook example of why ChromeOS makes sense. This bug was
found, fixed, and, thanks to the evergreen model, deployed to actual people's
machines without the users ever knowing anything about it.

~~~
woodrowbarlow
all modern operating systems are evergreen. how is ChromeOS unique in regard
to bugfixes?

~~~
blacksmith_tb
My memory is that ChromeOS defaults to aggressively auto-updating itself,
where macOS and Windows both remind / pester / offer to defer. So greener than
evergreen?

~~~
nikanj
Windows today installs updates automatically and reboots when it's the least
convenient. For a normal user, it's near-impossible to make it stop, and most
people just grudgingly accept the lost work and ruined movie nights.

You can google for a solution, but that solution only works on last year's
version, the professional edition, or similar.

~~~
PuffinBlue
> Windows today installs updates automatically and reboots when it's the least
> convenient

I thought that was how it worked too, having just switched back to having to
use Windows.

But you can pick a time of your choice and specify a day up to about 5 days in
the future or something, so it's not as bad as I thought it was going to be.

Coming from Linux it's still an annoying difference between the operating
systems, but not as bad as people tend to make out.

~~~
will_hughes
> But you can pick a time of your choice and specify a day up to about 5 days
> in the future or something, so it's not as bad as I thought it was going to
> be.

Only if you catch it while it's prompting.

I've seen people set up for a meeting/presentation/etc, test everything's
working and then walk away from their computer to grab a drink. In that time,
Windows has popped up a "Hey, we're going to perform updates" dialog which if
you don't catch it before it disappears, becomes uncancellable/undeferrable.
Depending on the update (e.g. the Creators Update) it then spends the next 20-45
minutes with the entire machine being unusable.

I've seen many panicky tweets from people who are about to give a talk in an
hour or two: they've just opened their laptop and it's resumed into the
dreaded "Oh, we're just updating now!" screen.

------
zerebubuth
Wow! That's really impressive. Seems like a _lot_ of work and ingenuity went
into this.

It's great that large, corporate projects like Chrome OS are attracting the
sustained attention necessary to find bugs such as this one. But I worry that
projects without such deep pockets are crowded out, leaving bugs unreported.
Are many people doing security audits of open source projects without bug
bounties?

~~~
delroth
Google has been doing something close to bug bounties for many "critical" open
source projects. Instead of focusing on bugs, however, the Patch Rewards
program focuses on countermeasures: integrating a project into OSS-Fuzz, adding
sandboxing, etc.

https://www.google.com/about/appsecurity/patch-rewards/

------
Izmaki
It’s great to see the bug program in effect. Also a huge thank you to all the
whitehats out there!

------
mbertschler
Does reward-100000 mean a $100k bounty?

~~~
Ajedi32
Yes. https://www.google.com/about/appsecurity/chrome-rewards/

> We have a standing $100,000 reward for participants that can compromise a
> Chromebook or Chromebox with device persistence in guest mode

~~~
euyyn
Searching by label shows only another instance of a bug that got paid that
amount:

https://bugs.chromium.org/p/chromium/issues/detail?id=648971&can=1&q=label%3Areward-100000&colspec=ID%20Pri%20M%20Stars%20ReleaseBlock%20Component%20Status%20Owner%20Summary%20OS%20Modified

It was actually reported by the same guy!

~~~
esturk
I wonder if s/he saved up a bunch of bugs to chain together into a single big
exploit. In the latest report, there were 6 bugs chained together.

Imagine reporting 6 small bugs individually, netting you 6 x $1,000 = $6k. But
if you save each one, they may chain together for a potential $100k bounty. Of
course, any insight that reveals these underlying relations is most certainly
worth $100k.

~~~
lomnakkus
AFAIUI (not a security researcher) that's actually how most of the most
devastating security exploits work. There are obvious exceptions like
Heartbleed, but in general escalation through multiple levels seems to be the
name of the game.

~~~
rodjun
Or you can just get root from JavaScript, which that guy also did.

------
free2chill
How do people discover this stuff? I am always so amazed.

~~~
guessmyname
- Fuzzers are _(most of the time)_ a great bug finder,

- Random bugs that someone reported leading to something bigger,

- You already know the software well enough to dive inside,

- Luck ¯\\_(ツ)_/¯

~~~
Jyaif
Finding 6 vulnerabilities you can chain is about very hard work, not luck.

~~~
guessmyname
I am referring to "luck" in finding the "initial" bug; obviously the rest is
hard work.

~~~
rodjun
Why would one be luck and all the others skill?

------
megamindbrian2
That didn't take long.

------
danjoc
>reward-100000

------
eugeneionesco
Beautiful chain of exploits, well done.

