With Trusted Types on, raw strings are disallowed directly at the unsafe sinks, i.e. innerHTML no longer accepts strings, only instances of TrustedHTML. TrustedHTML can only be created by a Trusted Types policy, and by isolating policies from user-generated and other untrusted content you can guarantee that you don't have XSS holes.
* Note for the curious: This is how we're locking down lit-html so that it's completely safe from XSS. We have a simple policy that's only accessible to the template strings processor, so that the only strings trusted in an application are the template literals written by developers. All other strings will not be allowed at unsafe sinks. We don't even trust the other internals of lit-html. See https://github.com/Polymer/lit-html/blob/ceed9edc0aecdf82588...
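A minimal sketch of the policy mechanism. The `trustedTypes` object below is only a stand-in that mimics the shape of the browser global so the example runs anywhere; in Chrome with Trusted Types enforced you would use the real global, and the `escape` policy name is made up:

```javascript
// Stand-in mimicking the browser's trustedTypes factory, so this sketch runs
// outside a browser. In a real page with enforcement on, only values minted
// by a policy like this are accepted at sinks such as innerHTML.
const trustedTypes = {
  createPolicy(name, rules) {
    return {
      createHTML(input) {
        // Real policies return opaque TrustedHTML objects; a small wrapper
        // with a toString() stands in here.
        const html = rules.createHTML(input);
        return { toString: () => html };
      },
    };
  },
};

// A policy that escapes markup — the only code path allowed to mint "TrustedHTML".
const escapePolicy = trustedTypes.createPolicy('escape', {
  createHTML: (s) =>
    s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;'),
});

const safe = escapePolicy.createHTML('<img src=x onerror=alert(1)>');
console.log(String(safe)); // the payload arrives inert at the sink
// element.innerHTML = safe;  // with enforcement on, a plain string here would throw
```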
Trusted Types will prevent a dependency or careless developer from setting innerHTML without going through a policy you've evaluated and decided to trust, but it doesn't have an HTML sanitizer, so for those cases a library like DOMPurify is still necessary.
Reminds me of how CSP was seen as the glorious savior. But so far it only helps the big sites (Google, Facebook, Twitter, etc.).
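For context, a reasonably locked-down policy is just one header (directive values illustrative); the hard part isn't writing it, it's getting a whole site's inline scripts and third-party tags to comply, which is why mostly the big sites manage it:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'
```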
I would be more concerned about using server-side sanitizers, due to the impedance mismatch between client and server HTML parsing algorithms.
What you said can be applied generically to every security control, which is why security is hard.
You're still catching entire classes of existing issues.
You're very close to understanding something.
(Though in defense of DOM purifiers, they can use a whitelist.)
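A whitelist flips the failure mode: anything not explicitly allowed is dropped instead of let through. A toy sketch of the idea over already-parsed nodes (the tag and attribute lists are illustrative; a real sanitizer like DOMPurify walks an actual DOM tree rather than plain objects):

```javascript
// Toy allowlist filter over pre-parsed nodes. Illustrative lists only —
// real sanitizers operate on a genuine DOM tree, not plain objects.
const ALLOWED_TAGS = new Set(['b', 'i', 'a', 'p']);
const ALLOWED_ATTRS = new Set(['href', 'title']);

function filterNodes(nodes) {
  return nodes
    .filter((n) => ALLOWED_TAGS.has(n.tag)) // drop any tag not on the list
    .map((n) => ({
      tag: n.tag,
      attrs: Object.fromEntries(
        // drop any attribute not on the list (onclick, onerror, etc. never pass)
        Object.entries(n.attrs).filter(([name]) => ALLOWED_ATTRS.has(name))
      ),
    }));
}

const dirty = [
  { tag: 'a', attrs: { href: '/x', onclick: 'alert(1)' } },
  { tag: 'script', attrs: { src: 'evil.js' } },
];
console.log(filterNodes(dirty));
```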
Windows Defender is sufficient & bundled with Windows
Architecturally it does the job, but the challenge is integration with dated browsers. Polyfills for Shadow DOM inherently break the security features it provides.
Better cross-browser Shadow DOM support would be a step in the right direction to making things like DOMPurify safer, but unfortunately it seems like we're a while away from that according to Can I Use.
1. Don't resize dynamically to fit the email content, not unless you enable unique origin JS execution and do message passing to the parent window. But if you do that then you open the door to crypto-mining, tracking, spectre variants, and browser zero-days.
2. Don't play well with keyboard shortcuts since they steal keyboard events from the parent window when focused. Proxying keyboard events to the parent is even more dangerous since an attacker could then spoof keyboard events to control the parent.
3. Don't let you whitelist allowed HTML tags, attributes and CSS properties, which means there's no way to block email tracking.
And that's just for viewing email content. How would you sanitize and whitelist unsafe email content when replying/forwarding?
DOMPurify combined with CSP is safer and stricter. And if you wanted to, nothing prevents you from putting the sanitized result in a sandboxed iframe anyway. But it needs to be sanitized first.
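Roughly this, with made-up ids and assuming DOMPurify is loaded on the page — sanitize first, then use the empty `sandbox` attribute (no scripts, unique origin) purely as a second layer:

```html
<iframe id="mail-view" sandbox></iframe>
<script>
  // emailHtml is a placeholder for the untrusted message body.
  const clean = DOMPurify.sanitize(emailHtml);
  document.getElementById('mail-view').srcdoc = clean;
</script>
```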
In a way, the situation is better client-side, because when running code on the client's side, you can check how exactly the browser parses the HTML code.
I mean, you're really just summarizing the presentation. It should be an API that's in the browser. It isn't. So people need to use a library. That's OK. But not great.
I think you meant to say that you can't sanitize on the server? Because with end-to-end encryption the server has no access to the plaintext to be sanitized. Only the client can sanitize; only the client has the plaintext.
The slides provide several reasons why server-side algorithms are worse.
"the situation is better client-side, because when running code on the client's side, you can check how exactly the browser parses the HTML code."
Yes, and for this reason, DOMPurify is a client-side sanitizer.
In fact, rather than bash the DOM, Mario wants the DOM to subsume his own DOMPurify project so that users don't have to trust him as a third-party module developer. That paints the DOM in a favorable light, if you ask me.
The context of "The DOM is a mess!" on slide 27 is specifically in terms of security, namely "DOM Clobbering" where an attacker can rewrite DOM methods from underneath you, and impedance mismatch owing to parser differences and bugs ("HTML elements implemented in completely different ways, different attribute handling" in the context of defending against XSS).
It's an honest assessment that's more a statement of fact than anything intended to be hurtful. It's not even a harsh statement of truth at that. I find it hard to believe that Chrome or Firefox engineers would find that offensive. I think they would well agree.
DOMPurify is really fantastic security work. It would make for a brilliant contribution to the DOM.
I don't see that as a valid security concern in this case. Yes, it will break your code or do unintended things, but for this to happen an attacker must already have access to the page in your user's security context, which means some other preventable security violation has already transpired. This applies equally to any application or language. Even if you could freeze the DOM so that nothing can be assigned to object properties, you might ward off DOM clobbering, but there would still be a malicious user in your security context reading all your secure and private details. If you prevent the malicious agent from gaining access in the first place, this security concern with the DOM is eliminated.
In other words, whether or not DOM clobbering occurs, a prerequisite security violation is necessary, and hardening the DOM won't provide the necessary solution.
Aside from malicious third parties intentionally writing over event handler assignments, DOM clobbering really comes down to poor code management, which is the real security problem here. That makes this a stylistic concern. Additional layers of controls aren't going to make people instantly less lazy. There are better ways to solve for this.
> HTML elements implemented in completely different ways
HTML is not the DOM. These are separate and unrelated technologies that are maintained in very different specifications. This separation is not an accident; it is by design. I know the degree of separation between HTML and the DOM is a contentious point.
It is for sure a valid security concern when doing client-side XSS filtering, which is what the presentation is about. And no, DOM Clobbering does not require an attacker to "have access to the page in your user's security context". Fastmail have an introduction here: https://fastmail.blog/2015/12/20/sanitising-html-the-dom-clo.... Simply put, there's no way to do safe client-side XSS filtering without addressing DOM Clobbering as a valid security concern.
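For anyone unfamiliar, clobbering needs nothing but markup: because of the legacy named-property lookup rules on documents and forms, injected elements with the right `name`/`id` shadow the very DOM APIs a sanitizer relies on. Classic examples:

```html
<!-- Markup only, no script anywhere. -->
<img name="getElementById">
<!-- document.getElementById now resolves to this <img>, not a function. -->

<form><input name="attributes"></form>
<!-- form.attributes now resolves to the <input>, fooling sanitizers
     that iterate a node's attributes collection. -->
```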
"hardening the DOM won't provide the necessary solution."
And the author is not suggesting or waiting for that. On the contrary, the premise is that XSS sanitizers need to be client-side exactly because the DOM is not hardened and has so many different implementations (even across browser versions). It's counter-intuitive I know, but server-side XSS sanitizers really can't address cross-browser parser differences safely. So again, it's not a question of "stylistic concerns" or "code management" but of doing secure XSS filtering wherever it is best done.
"There are better ways to solve for this."
And if you go on to the next slide, 28, the point is that despite the difficulties, this has been solved in DOMPurify, which should be added to the DOM so that developers can finally have a first-class client-side XSS sanitizer, without having to trust DOMPurify as third-party code.
There are not many people who know more about client-side XSS filtering than Mario Heiderich. And I know of no better client-side solution than DOMPurify.
No code injection is required. DOM Clobbering simply presents an ambiguous view of the content being sanitized.
Again, the problem here is injection, specifically HTTP injection. Email doesn't have an injection problem because it has a more robust protocol: RFC 2821, 2822 and their descendants. To make emails pretty somebody had the really bad idea of embedding HTML in email messaging. HTML is reliant upon the simplified architecture of the HTTP protocol. When you want that pretty content in email you make an HTTP request and some server issues a response.
If they simply took the HTML out of email, this security problem would be instantly solved for email. Therefore this isn't an email problem. It isn't even an HTML problem. It's a problem of unregulated HTTP requests.
> HTML is essentially just a serialisation format for the Document Object Model (DOM)
They are separate things.
The real problem is that lazy developers are punishing their users under pressure from business marketing leaders. There are two simple solutions to this problem:
1. Don't do stupid things that punish your users.
2. Create a web standard ACL that limits all HTTP traffic to/from a browser.
These are both sane and simple solutions. Nobody wants them, because bad developers don't want to own the liability for implementing somebody else's bad decisions (probably a marketing executive's), and because an ACL standard in the browser would kill the web media business.