
Google Chrome is ditching its XSS detection tool - wglb
https://nakedsecurity.sophos.com/2019/07/18/google-chrome-is-ditching-its-xss-detection-tool/
======
koto1sa
Disclaimer: I'm working on the Trusted Types project in Google.

To clarify, Trusted Types are not a replacement for XSS auditor. They are both
related to XSS, but are fundamentally different and even target different
flavors of XSS.

Trusted Types are an opt-in browser API that helps developers prevent DOM-
based (~client-side) XSS by mandating that developer-specified rules are
applied to data that reaches risky functions (like eval or innerHTML). We're
working on having it available as a proper W3C spec. More info at
[https://bit.ly/trusted-types](https://bit.ly/trusted-types) or
[https://youtu.be/1KQngEZ8qH4](https://youtu.be/1KQngEZ8qH4)
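A minimal sketch of how the opt-in model looks in practice (the policy name and escaping rules here are illustrative, not from the spec; the "tinyfill" fallback lets the same code run in environments without Trusted Types support, just without enforcement):

```javascript
// Enforcement is opted into per-page via a CSP header, e.g.:
//   Content-Security-Policy: require-trusted-types-for 'script'
//
// Tinyfill: where Trusted Types is absent, fall back to an object
// whose createPolicy simply returns the rules object.
const tt = (typeof trustedTypes !== 'undefined')
  ? trustedTypes
  : { createPolicy: (name, rules) => rules };

// Under enforcement, only values minted by a policy may reach risky
// sinks like innerHTML; assigning a plain string throws a TypeError.
const escapePolicy = tt.createPolicy('escape', {
  createHTML: (input) => input
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;'),
});

// element.innerHTML = escapePolicy.createHTML(userInput); // OK
// element.innerHTML = userInput;                          // blocked
```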

XSS Auditor was an opt-out, Chrome-only feature that tried to stop reflected
(~server-side) XSS payloads from executing after the injection had already
happened. It was a now-outdated concept. The idea was nice - prevent XSS
without changing a single line of code in your application - but we now know
this just doesn't work for XSS.

~~~
lol768
> _Trusted Types are an opt-in browser API that helps developers prevent DOM-
> based (~client-side) XSS by mandating that developer-specified rules are
> applied to data that reaches risky functions (like eval or innerHTML). We're
> working on having it available as a proper W3C spec. More info at
> [https://bit.ly/trusted-types](https://bit.ly/trusted-types) or
> [https://youtu.be/1KQngEZ8qH4](https://youtu.be/1KQngEZ8qH4)_

Can you expand a little on what Trusted Types gives you if you already have a
strict CSP which prevents unsafe-eval/ unsafe-inline and e.g. has a 'self'
base-uri set?

I'm assuming it would prevent you injecting other HTML if a DOM-based
vulnerability existed.

~~~
koto1sa
Trusted Types aim to prevent the injection itself; the XSS-related CSP
directives (script-src etc.) act as an XSS exploit mitigation that fires after
the injection is already there. So, for example, even if JS execution is
stopped, the attacker may still exfiltrate data via injected form tags and the
like.

CSP was traditionally deployed by security folks only and is a bit
disconnected from the regular dev workflow. For example, if your application
does not conform to CSP (i.e. you have an XSS), you only find out when you
deploy it, and the violations keep coming.

TT are part of your JS program, and are much closer to how developers prevent
other bugs in their programs. For example, since the API uses types, you can
even set up your build for the application not to compile if innerHTML is used
with a string.

Additionally, TT allow the application to be a bit more precise - e.g. maybe
you can't refactor the application not to use eval() ever (this is
surprisingly common), but would rather make sure that this one eval() instance
is secure, and disallow all others. TT solve that elegantly. Your reviewed
eval starts using eval(TrustedScript), and all other evals - should they exist
- are blocked.
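A hypothetical sketch of that "one reviewed eval" pattern (the policy name and the allow-list rule are made up for illustration; the tinyfill fallback keeps the snippet runnable where Trusted Types is absent):

```javascript
// Tinyfill: fall back to applying the rules directly where the
// Trusted Types API is unavailable.
const tt = (typeof trustedTypes !== 'undefined')
  ? trustedTypes
  : { createPolicy: (name, rules) => rules };

// Only this policy can mint TrustedScript. Under enforcement
// (require-trusted-types-for 'script'), eval() with a plain string
// throws, so every other eval in the codebase is blocked.
const reviewedEval = tt.createPolicy('reviewed-eval', {
  createScript: (src) => {
    // The reviewed constraint: only a parenthesised object literal
    // (e.g. a config blob) may be evaluated.
    if (!/^\(\{[\s\S]*\}\)$/.test(src)) {
      throw new TypeError('eval of this script is not allowed');
    }
    return src;
  },
});

const config = eval(reviewedEval.createScript('({debug: false})'));
```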

In our experience rolling out CSP, its nonced version (with no
'strict-dynamic', no unsafe-*) works well for server-side injections, whereas
TT cover the client side better.
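For context, a nonce-based CSP of the kind described might look like this (the nonce value is illustrative and must be freshly generated per response):

```http
Content-Security-Policy: script-src 'nonce-Kf3x9aQ2'; object-src 'none'; base-uri 'none'
```

Only `<script nonce="Kf3x9aQ2">` elements run; injected scripts without the nonce are refused.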

[https://github.com/WICG/trusted-types/wiki/FAQ#do-i-need-
tru...](https://github.com/WICG/trusted-types/wiki/FAQ#do-i-need-trusted-
types-when-we-have-a-content-security-policy)

~~~
lol768
Thanks for a very comprehensive answer!

------
lol768
Good. Whilst the XSS Auditor was able to protect against quite a wide range of
payloads for reflected vulns, I think it caused more harm than good. Quite
often I'd have to argue that an issue was real because the PoC didn't work in
Chrome. Often I could find a way to make the server modify the payload so that
the browser couldn't match up the input and output, and then the payload would
execute - but I think the legacy of this feature is that it caused developers
to take reflected XSS much less seriously.

------
TomAnthony
Interesting! I recently found that Googlebot was susceptible to XSS [1], but
it was mitigated by the recent updates (they updated Googlebot from Chrome 41
to the most recent version).

I guess a lot will depend on the new Trusted Types API. If it is opt-in then I
imagine there will be a period whilst it is adopted - what happens during this
time?

[1] [https://www.tomanthony.co.uk/blog/xss-attacks-googlebot-
inde...](https://www.tomanthony.co.uk/blog/xss-attacks-googlebot-index-
manipulation/)

~~~
lol768
Nice write-up! It's a shame that your report wasn't handled with the usual
prompt response & fix.

------
fluxsauce
Scary headline; FTA:

> Don’t worry, though – another, hopefully better, protection measure is on
> the way.

> Another feature is in development to help: an application programming
> interface (API) called Trusted Types. Trusted types treats user input as
> untrustworthy by default and forces developers to take steps to sanitise it
> before it can be included in a web page.

A better headline may be "Google Chrome replacing XSS Auditor with Trusted
Types"

~~~
lol768
> _A better headline may be "Google Chrome replacing XSS Auditor with Trusted
> Types"_

As I think a Googler has mentioned above, the XSS Auditor targeted _reflected
XSS_ vulns (caused by the server unsafely echoing user input back into the
page). Trusted Types protects against DOM-based vulns, which are client-side.
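To make the distinction concrete, an illustrative sketch (function and variable names are made up):

```javascript
// Reflected XSS (server-side): the server echoes user input into
// the response HTML without escaping. Illustrative template bug:
const renderSearch = (query) => `<p>Results for ${query}</p>`;
// renderSearch('<script>steal()</script>') reflects the payload
// straight into the page, where the browser executes it.

// DOM-based XSS (client-side): page JS routes attacker-influenced
// data (URL fragment, postMessage, etc.) into a sink, e.g.:
//   resultsDiv.innerHTML = decodeURIComponent(location.hash.slice(1));
// Trusted Types targets this second flavor; the XSS Auditor
// targeted the first.
```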

------
commandlinefan
> Websites should prevent this kind attack

I've had far too many coworkers argue that we shouldn't "waste time"
preventing XSS attacks _because_ Chrome already detects and prevents (most of)
them.

~~~
sieabahlpark
Your coworkers aren't too smart

~~~
acdha
This comment doesn't really contribute anything — and it makes the common
mistake of treating this as an issue of intelligence when it's really about
specific security experience and awareness. In particular, I've seen a
pathology around XSS where part of the problem is that someone _is_ pretty
savvy and downplays the risks thinking “well, I'd never set things up so that
would work”, ignoring all of the times they've made mistakes, used a library
or vendor product, had projects change hands, etc.

------
maxwellito
It's interesting to hear, given that the lack of an XSS Auditor in Firefox was
a recurring topic on Bugzilla for years. Even IE and Safari got similar
protections. Curious to see if the XSSs I found over the years will now work
in Chrome.

~~~
koto1sa
Well, the age of in-browser reflected XSS filters is simply over.

The idea was flawed for multiple reasons, and the filters are thankfully now
gone from Chrome and Edge ([https://portswigger.net/daily-swig/xss-protection-
disappears...](https://portswigger.net/daily-swig/xss-protection-disappears-
from-microsoft-edge)). I guess among modern browsers only Safari still has
one. There are modern XSS defenses you can use; XSS filters just did not
stand the test of time.

I would say it is a good thing if your XSS now works in Chrome, just as it
works in Firefox. That XSS should be fixed by the website, rather than the
website owners neglecting the fix and assuming they're protected because the
alert doesn't fire in Chrome - especially given that an attacker who spends a
bit of time tailoring the payload may bypass the auditor anyway.

~~~
vunie
> That XSS should be fixed by the website, instead of that website owners
> neglecting the fix and assuming they're protected because the alert doesn't
> fire in Chrome.

What if a website doesn't fix its XSS vulnerabilities and continues to spew
attacker-controlled content? I don't think it will help users to base browser
security on "shoulds".

I've been taught that security should be built in layers. Removing a
functioning albeit not perfect layer for no good reason is baffling to me.

------
josteink
From a cursory glance, the replacement (Trusted Types) just seems like a very
cumbersome runtime type-safety engine shoehorned into a JS/DOM context.

Is that a correct understanding?

If so... why use this cumbersome API instead of relying on existing, well-
implemented concepts already present in Flow or TypeScript?

That way, Flow or TypeScript could compile down to whatever form this
requires.

Or am I missing some key bits here?

~~~
m12k
This isn't a programming language type safety mechanism, it's a way to tell
the browser to enforce that you can't run code that could potentially be
vulnerable to XSS, except in specific places that you designate with policies.
This example was illuminating for me:
[https://developers.google.com/web/updates/2019/02/trusted-
ty...](https://developers.google.com/web/updates/2019/02/trusted-
types#trusted-types)

~~~
josteink
Tomato/tomato.

You are effectively saying that only certain _types_ of strings and DOM
structures are allowed on certain properties and attributes.

This is perfectly encodable in a type-system. I don’t see a reason to dismiss
that observation outright.

------
jammygit
I learned about XSS mitigation 3-4 years ago through Google Gruyere, a
tutorial that comes with a sandboxed environment to practice in [1]

Where did others learn about XSS mitigation? Any resources to recommend?

[1] [https://google-gruyere.appspot.com/](https://google-gruyere.appspot.com/)

------
lallysingh
The title is wrong. Google's not ditching anything, it seems.

