
Escaping the Chrome Sandbox with RIDL - tptacek
https://googleprojectzero.blogspot.com/2020/02/escaping-chrome-sandbox-with-ridl.html
======
xenadu02
I wonder why their Mojo system doesn't use mach ports on darwin platforms? You
can pass port rights along to another process but they aren't forgeable.
Unless the renderer process itself holds a right to send messages to the network
process, it simply can't, no matter what it knows.

A process can also disinherit its own bootstrap port, preventing it from using
most services except for port rights it already holds.
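The same unforgeability property exists on any Unix via file-descriptor passing over AF_UNIX sockets. A minimal sketch (kept in one process for brevity, and using Python's `send_fds`/`recv_fds` wrappers rather than Mach APIs; in a real broker the two socket ends would live in different processes):

```python
import os
import socket

# A pipe's read end stands in for a capability: *holding* the descriptor
# is what grants access, and the kernel will not let you forge one.
r, w = os.pipe()

# An AF_UNIX socket pair stands in for the IPC channel between processes.
sender, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# The kernel copies the descriptor into the receiver's fd table via
# SCM_RIGHTS; the integer an attacker might guess or leak is meaningless
# outside the process that legitimately holds the descriptor.
socket.send_fds(sender, [b"here is a capability"], [r])
msg, fds, flags, addr = socket.recv_fds(receiver, 1024, maxfds=1)

os.write(w, b"hello")
data = os.read(fds[0], 1024)
print(data)  # the received descriptor really works
```

Compare that with Mojo's random port names: there, the name itself is the secret, so anything that can read it can use it.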

~~~
comex
Using kernel-provided capabilities (like Mach ports or Unix file descriptors)
directly might have too much overhead if the design uses a lot of them. But
even if that’s true, it should be possible to implement unforgeable
capabilities in userland using a trusted broker process.

It’s a classic design mistake to represent IPC access rights using secrets
instead of capabilities. Secrets _seem_ more convenient to work with: they’re
just data, which can be encoded and transformed like any other data, whereas
capabilities need to be kept separate throughout multiple levels of
abstraction, from your IPC wrapper functions down to the low-level message
sending primitives.

And secrets are _theoretically_ secure, if you do everything right. After all,
in the related domain of network services, there is no trusted broker to
represent capabilities, so you have to use secrets in various forms, like MAC-
ed and encrypted cookies, or TLS certificates. And it mostly works out in
practice.

But secrets are risky. Even with network services, you have the fundamental
downside: the security model is compromised if an attacker can merely _read_
data they shouldn’t, like the MAC key or the TLS private key, rather than
having to _modify_ data. That greatly increases the impact of vulnerabilities
that only allow reads – especially if, like in this exploit, you can only read
a small amount of random data, rather than whatever data you want.
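For instance, a MAC-ed cookie scheme along these lines (names and key hypothetical) stands or falls with the secrecy of a single key; an attacker who can merely *read* `SERVER_KEY` can mint a valid cookie for any user, no write primitive required:

```python
import hashlib
import hmac

# Hypothetical server-side MAC key; the whole scheme is only as strong
# as the secrecy of these bytes.
SERVER_KEY = b"hypothetical-server-mac-key"

def issue_cookie(user: str) -> str:
    # Tag the username so clients can't alter it undetected.
    tag = hmac.new(SERVER_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{tag}"

def verify_cookie(cookie: str):
    # Recompute the tag and compare in constant time.
    user, _, tag = cookie.rpartition(".")
    expected = hmac.new(SERVER_KEY, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(tag, expected) else None

print(verify_cookie(issue_cookie("alice")))  # valid cookie -> "alice"
print(verify_cookie("alice.deadbeef"))       # forged tag -> None
```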

When it comes to IPC, using secrets is even riskier for multiple reasons.
First, an attacker is very ‘close’ to the target, running on a different
process in the same machine, which means they’re in a much better position to
try to leak memory using side-channel attacks. Although the full power of
hardware side-channel attacks has only recently been exposed, weaker side-
channel attacks have been a known threat for a long time. There are also pure-
software timing side channels, i.e. where the software does a different amount
of work depending on a condition, which would let an attacker guess the value
of the condition even if the CPU itself executed instructions in constant
time. Second, IPC communications protocols tend to be lower-level, e.g. using
shared memory, and the trusted side is often written in an unsafe language
like (as here) C++. In contrast, network protocols and servers can typically
afford to be somewhat higher-level because the network itself introduces a
bunch of overhead anyway (something that also makes them more resistant to
timing attacks).
But low-level means greater risk, especially for low-level vulnerability
categories like memory disclosure (or, for that matter, memory corruption).
Not that network services can’t be vulnerable to memory disclosure – consider
Heartbleed or Cloudbleed – but it’s more likely with IPC.
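A toy example of such a pure-software channel, checking a secret port name byte by byte (illustrative only; the secret here is made up):

```python
import hmac

SECRET = b"supersecretportname"

def leaky_check(guess: bytes) -> bool:
    # Early-exit comparison: the loop runs longer the more leading bytes
    # match, so response time leaks the secret one byte at a time.
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
    return True

def constant_time_check(guess: bytes) -> bool:
    # Stdlib constant-time comparison avoids the data-dependent early exit.
    return hmac.compare_digest(guess, SECRET)
```

An attacker timing `leaky_check` can brute-force one byte at a time instead of having to guess the whole name at once, and over IPC on the same machine the timing signal is much cleaner than over a network.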

Third, without getting too far into the weeds, the objectives are often
somewhat different. With a network service, leaking random data is often a
pretty good attack by itself, especially if the service directly handles data
for many users. With IPC, the attacker usually needs to use a separate exploit
to even get in position to interact with the IPC interface, so ‘only’ being
able to read data is sort of a waste; you really want to escalate privileges.
This is only a rule of thumb and isn’t always true – sometimes network
services are only accessible after compromising a different network service;
sometimes IPC is intentionally exposed to untrusted code. But it’s a factor.

The most important difference, though, is what I already said: with IPC you
don’t _have_ to use secrets. They’re risky in both network services and IPC,
but with IPC you can use a trusted capability broker instead, and so you
should. Capabilities are a bit less convenient, not being pure data, but
precisely for that reason they're harder to screw up and harder to leak by
accident.

------
tptacek
This is an excellent writeup, even by Project Zero standards.

~~~
brian_herman
Thanks for sharing! Your posts are awesome as usual!

------
big_chungus
Another issue caused by hyperthreading. It's starting to look more and more
like the OpenBSD guys made the right choice by turning it off. At this point,
I think the only other option is for OS guys to implement some kind of
security-aware scheduler, where threads can request secure execution on a
dedicated processor. Though I suppose it might make more sense to execute on
isolated processors by default and have threads explicitly permit unsecured
execution for non-critical stuff.

~~~
pjmlp
Indeed, the multi-process model is the only way to go for security-conscious
code, and also as a mechanism for sandboxing plugins, not only threads.

------
est31
> At the time of writing, both Apple and Microsoft are actively working on a
> fix to prevent this attack in collaboration with the Chrome security team.

Why is Apple a contributor to Chromium? I thought they still used WebKit? Or
maybe the bug exists in WebKit too?

~~~
Santosh83
I'd assume they're working at the OS level since part of the exploit involves
the hardware (hyperthreading)...

~~~
saagarjha
> To protect against attacks on affected CPUs make sure your microcode is up
> to date and disable hyper-threading (HT).

I wonder what they can honestly even do beyond disabling hyper-threading by
default…

~~~
baybal2
> However, the browser process is also using URLLoaderFactories for different
> purposes and those have additional privileges. Besides ignoring the same-
> origin policy, they are also allowed to upload local files. Thus, if we can
> leak one of their port names we can use it to upload /etc/passwd to

My take on it: Chrome just has too many things it uses internally that are
exploitable if one finds a way to trick the sandbox.

The entire point of the sandbox seems to be lost when so much stuff happens
inside it.

Kick unneeded stuff out of the rendering processes, reverting to the original
design for them. "The renderer" in their design does way too many things other
than just rendering.

Again, WASM will be a bane no matter how you secure it. JIT compilation of
insecure code is a bane. asm.js has been a source of multiple exploits; it's
poorly maintained and must be axed.

~~~
pjmlp
WebAssembly advocates keep hand-waving away the fact that internal memory state
inside modules is open to data corruption, because bounds checking only happens
at the edges of linear memory segments.

Something like Heartbleed is not preventable under the current WebAssembly
sandbox model.
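A sketch of why, simulating a wasm instance's linear memory with a flat byte array (offsets and sizes are made up for illustration):

```python
PAGE_SIZE = 65536
linear_memory = bytearray(PAGE_SIZE)  # one wasm page

# Two "allocations" the module's own malloc carved out of linear memory:
buf_off = 100   # a 16-byte buffer
key_off = 116   # a secret sitting right behind it
linear_memory[key_off:key_off + 4] = b"KEY!"

def i32_store8(addr: int, value: int) -> None:
    # The only check wasm performs: is the address inside linear memory?
    if addr >= len(linear_memory):
        raise IndexError("trap: out-of-bounds memory access")
    linear_memory[addr] = value

# A 20-byte write into the 16-byte buffer never leaves linear memory,
# so no trap fires -- it silently overwrites the adjacent secret.
for i in range(20):
    i32_store8(buf_off + i, ord("A"))

print(linear_memory[key_off:key_off + 4])  # the secret is now b'AAAA'
```

The trap only fires once an access crosses the edge of linear memory; everything the module keeps *inside* it (heap, stack-in-memory, secrets) can stomp on everything else, exactly like a classic C heap overflow.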

So I just keep expecting for the first wave of Flash like exploits to
eventually come in, when WebAssembly starts being a valuable target.

------
saagarjha
An interesting fuzzer called fuzzilli two links in:
[https://github.com/googleprojectzero/fuzzilli](https://github.com/googleprojectzero/fuzzilli)

~~~
big_chungus
Swift is an interesting language choice for something like this. As far as I
am aware, support is non-existent outside of Ubuntu and OSX, so it seems
limiting compared to, say, C.

~~~
cosmojg
> As far as I am aware, support is non-existent outside of Ubuntu and OSX, so
> it seems limiting compared to, say, C.

If it works on Linux, why would it need to work anywhere else?

~~~
pjmlp
I don't know, maybe just maybe, because not everyone uses GNU/Linux?

------
Santosh83
Any reason P0 doesn't seem to direct some time and energy towards finding zero
days in Firefox? Are they mandated to only investigate codebases of interest
to Google?

~~~
kingkilr
(Former Firefox Security Engineer)

I suspect it's because Firefox exploits have looked the same for the last
several years -- there has not been a lot of novelty required to implement an
exploit, given an arbitrary read/write primitive.

P0 does report vulnerabilities to Firefox, though, and they obviously get
fixed; they're just not particularly interesting to exploit.

~~~
saagarjha
> I suspect it's because Firefox exploits have looked the same for the last
> several years -- there has not been a lot of novelty required to implement
> an exploit, given an arbitrary read/write primitive.

Surely other browsers do not differ from this significantly?

~~~
tedunangst
An arbitrary write primitive in the Chrome renderer process hasn't been game
over for quite some time.

~~~
saagarjha
I mean, is it on Firefox?

~~~
tedunangst
Until a year ago(?), yes.

------
brian_herman__
I love the work Project Zero does, but I wish Google would do some basic
security things for Android, like streamlining the patch system so updates
actually reach phones, providing longer security-update support for their older
phones, and imposing more stringent security requirements for Android. I feel
like Project Zero is really good PR for them, but they could really do more for
the security of their apps and phones :/

