
Chromium Security: Site Isolation - tonyztan
https://www.chromium.org/Home/chromium-security/site-isolation
======
tonyztan
According to
[https://support.google.com/faqs/answer/7622138](https://support.google.com/faqs/answer/7622138),
"Site Isolation ... can be enabled to provide mitigation [against CPU
speculative execution attack methods] by isolating websites into separate
address spaces."

To enable, use chrome://flags/#enable-site-per-process

~~~
finchisko
I don't get how site per process can mitigate these new attacks. It's not
about reading other processes' memory, but reading kernel memory, which is
mapped into every process.

I'm not saying "site per process" is a bad thing. Of course it can help
prevent some kinds of attacks, just not these new CPU-related ones.

~~~
zetafunction
There are three separate attacks.

Site isolation mitigates variant 1 of Spectre, which allows same-process
reads.

It doesn't protect against variant 2 of Spectre, which could allow cross-
process reads. While this is believed to be much harder to exploit than the
first variant, there are several mitigations in development:

- Reduce the reliability of timing gadgets from JS

- Compiler defenses like LLVM's -mretpoline

- Intel's IBRS microcode update

As you mentioned, site isolation also won't help against Meltdown, which
allows disclosure of kernel memory: this requires the kernel page table
isolation patches.

~~~
zzzcpan
So, suggesting Site Isolation as a mitigation is security theater from
Google to calm down some users, but it doesn't actually help anyone. The real
mitigation is disabling JavaScript by default, which Google can never suggest.

~~~
Certhas
No.

- If you have an AMD CPU or run a kernel with KPTI, you are protected from
Meltdown.

- If you have an AMD CPU or compile the browser with retpoline, you are
protected from the second variant of Spectre (branch target injection).

- If you have site isolation, you are protected from the first variant of
Spectre (bounds check bypass).

Thus, as it stands (and my understanding is that more variants will inevitably
be found), this feature alone mitigates the known attacks on AMD hardware.

Of course the real mitigation is to air-gap your computer and only run code
you have proven to be secure by hand. But Google can never suggest that. /s

~~~
dralley
Not true; the second variant of Spectre is harder to exploit on AMD, but
still possible.

AMD is pushing microcode updates to close those holes, too.

~~~
kllrnohj
[https://www.amd.com/en/corporate/speculative-execution](https://www.amd.com/en/corporate/speculative-execution)

"Variant Two

Branch Target Injection

Differences in AMD architecture mean there is a near zero risk of exploitation
of this variant. Vulnerability to Variant 2 has not been demonstrated on AMD
processors to date."

And although Project Zero had multiple AMD test machines, they did not make
any claims that AMD was also vulnerable to variant 2. Can you link to any PoC
that has gotten variant 2 to work on AMD?

------
floil
I work on this feature. Ask me anything.

~~~
espadrine
As webapp developers, does recommending that our users enable it give any
meaningful protection for our website if we don't set
Access-Control-Allow-Origin (or only set it to a star)?

~~~
Ajedi32
> don't set Access-Control-Allow-Origin (or only set it to a star)

Huh? These are two _very_ different situations. Not setting
`Access-Control-Allow-Origin` would mean that no third-party site can read
data from your domain. Setting it to `*` would do the exact opposite,
allowing any site to read data on the page.

It's also unclear to me how this relates to Spectre and Meltdown.
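
Concretely, the two cases look like this on the wire (an annotated sketch
with hypothetical responses):

```
# No CORS header: other origins cannot read the response body
HTTP/1.1 200 OK
Content-Type: application/json

# Wildcard: any origin may read the response body
HTTP/1.1 200 OK
Content-Type: application/json
Access-Control-Allow-Origin: *
```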

~~~
espadrine
The Site Isolation page[0] states:

> _This protection is made possible by the following changes in Chrome's
> behavior:_

> […]

> _- Cross-site "documents" (specifically HTML, XML, and JSON files) are not
> delivered to a web page's process unless the server says it should be
> allowed (using CORS)._

I understand it to mean that the tab's process used to hold code that verified
the CORS header, and that the verification is now out-of-process.

[0]: [http://www.chromium.org/Home/chromium-security/site-isolation](http://www.chromium.org/Home/chromium-security/site-isolation)

------
feelin_googley
I notice that chromium.org does not require SNI.

SNI was created so that multiple SSL-enabled websites could share the same IP
address. (Perhaps because IP addresses are an undue expense for some website
owners, unlike, say, chromium.org.)

Assuming the single IP address is not "anycasted" (another undue expense),
does SNI imply that multiple websites[1] are in some way sharing a common
computer?

Could that be relevant with respect to "site isolation" in any way? Why or why
not? (For example, does the _server side_, perhaps with multiple "tenants"
sharing the same hardware, pose a risk to the client concerned about
undesirable memory access?)

[1] Presumably those websites are handling sensitive information, hence the
need for SSL.

~~~
russell_h
SNI usually implies that TLS is being terminated for multiple domains on a
single computer, but the process actually rendering the HTML (in this case I
guess it is Google Sites) may be running on a different computer entirely.
It's difficult to infer too much from use of SNI: on the modern internet it
often just means the site is behind some kind of CDN which doesn't bother to
provision new IP addresses for each site.

In any case, the Site Isolation feature described here is a browser feature,
unrelated to how the sites are hosted. Site Isolation provides additional
isolation between different websites loaded in a user's web browser in order
to limit the scope of some potential exploits: for example, to make it more
difficult for malicious code hosted on example.org to access your gmail.com
credentials.

------
hunter2_
macOS 10.13.2, released Dec 6, includes a patch against something
meltdown/spectre-related (see
[https://support.apple.com/en-us/HT208331](https://support.apple.com/en-us/HT208331)
where it says "Entry added January 4, 2018"). Does that mean Chrome Site
Isolation isn't helpful in mitigating speculative execution issues for macOS
10.13.2 users (although of course it will still be useful for mitigating UXSS
attacks)?

------
known
[https://en.wikipedia.org/wiki/Site-specific_browser](https://en.wikipedia.org/wiki/Site-specific_browser)

~~~
dullgiulio
Chrome is by far the most advanced mainstream browser, from a security point
of view.

The discussions about "Chrome-only" sites are absolutely acceptable, but
totally off-topic here. Mozilla has also been working hard on Electrolysis to
have multiple processes. That's a very big security feature that should become
a must-have for all browsers.

~~~
pcwalton
At this point, all major browsers use multiple processes.

~~~
floil
Yes, but site per process is harder: iframes need to be booted out of process
when they cross a site boundary.

To illustrate how much complexity this brings, consider a browser feature like
find-in-page. It used to work by traversing the document and descending into
subdocuments synchronously; it was basically a big for loop. With the page and
its iframes split across multiple processes, we now need a protocol and
interprocess communication to coordinate the find operation and assemble the
matches into a tree. The results of this async operation can arrive in any
order, and you need to filter out late-arriving responses for previous
queries when the user modifies the search string.

This pattern needs to be implemented, rinse and repeat, for hundreds of
subsystems. It was really hard.

