
On the Insecurity of Whitelists and the Future of Content Security Policy [pdf] - nsgi
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45542.pdf
======
exDM69
Argh. The CSP acronym has way too many meanings in computer science... Off the
top of my head:

    * Constraint satisfaction problem
    * Communicating sequential processes
    * Content security policy (topic of this article)

I'm sure there are more that I can't name right away. Not a great idea to use
the acronym in a paper topic.

~~~
nsgi
Point taken. Have updated the title to make things clearer.

------
okket
Previous discussion:
[https://news.ycombinator.com/item?id=12408328](https://news.ycombinator.com/item?id=12408328)
(9 days ago, 12 comments)

------
gorhill
> The authors conducted a large-scale study of browser extensions from the
> Chrome web store and found that many extensions tamper with the CSP of a
> page. Hence, they propose an endorsement mechanism that allowed an extension
> to ask the web page for permission before changing the security policy.

Terrible idea, and scary. A user agent should never require permission from a
site for the content it serves to be modified.

At most, I'd rather propose a new permission in the extension API to inform
the user that an extension requires the ability to relax a Content Security
Policy (further restricting the effective Content Security Policy would not
require any special permission).

------
ThePhysicist
Personally, after working on threat intelligence for almost a year I think the
traditional indicator-based approach will be dead soon, as there are just too
many ways to evade detection by mutating the indicators. Currently it's still
a bit of a hassle to register new domains and IP addresses, but with the
advent of IPv6 it will be easy to have hundreds of thousands of addresses for
e.g. a C&C server (possibly even a unique one for each malware payload), which
will render IP-based blocking very ineffective.

A similar trend can be seen in malware analysis, where already today hashsum-
based approaches are only able to catch commodity malware, and are nearly
useless to detect anything more sophisticated.

I therefore think the future of content security will rely on behavior-based
methods and flow analysis, as those are harder to fake or mass-produce.

On the other hand, Google is in a unique position as they see a very large
fraction of both the Internet traffic and e-mails being sent, which should
allow them to detect new types of malware much faster and more efficiently than
almost anyone else. Really makes me wonder why they don't have their own anti-
virus product yet (which seems to be a good way to spy on people's computers
and gather more user data as well).

~~~
unethical_ban
Hashing definitely is a coarse filter - and it is still useful, because there
is still a lot of low-effort garbage that can harm your org.

But about IPv6, the smallest subnet is a /64. So if you're getting hit by an
IPv6 malicious host, block the /64. Depending on the provider and known
allocation, it could be a safe bet to block as far up as /60 or /56, even.
That, along with the fact that only a tiny fraction of IPv6 is allocated at
all yet, means it's not /as much/ of a problem as one would think, given the
"more addresses than atoms" line people hear.
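The /64 rollup described above can be sketched with Python's `ipaddress`
module (the address here is a made-up RFC 3849 documentation address, not one
from the thread):

```python
import ipaddress

def covering_subnet(addr: str, prefix: int = 64) -> str:
    # strict=False masks off the host bits, yielding the enclosing network
    # (the subnet you would feed to a firewall block rule).
    return str(ipaddress.ip_network(f"{addr}/{prefix}", strict=False))

# Hypothetical malicious host seen in logs:
print(covering_subnet("2001:db8:abcd:12:ffff::1"))      # 2001:db8:abcd:12::/64
# If the provider's allocation is known, roll up further:
print(covering_subnet("2001:db8:abcd:12:ffff::1", 56))  # 2001:db8:abcd::/56
```

Blocking the /64 instead of the single address defeats the "one unique IPv6
address per payload" evasion the parent describes, since all of those
addresses land in the same subnet.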

------
spinningarrow
For others wondering - this is about Content Security Policy and not
Communicating Sequential Processes (as I'd unwittingly assumed).

------
kijin
CSP is way too ambitious and complicated for its own good. It's like SELinux
for the web. It's one of the best examples of a pursuit of perfection getting
in the way of what's good enough.

What's a CSP one-liner that you can copy and paste into any webpage to prevent
execution of all inline scripts, regardless of what external scripts, styles,
fonts, and other resources it loads? I suppose it will look something like
'default-src *', but even this most basic usage (and its possible side
effects) doesn't seem to be clearly documented in any authoritative guideline.
All the documentation is so obsessed with whitelisting this and that CDN,
blocking non-https scripts (don't all browsers already do that?), and all
sorts of other tricks that it scares away anyone who is just looking for
some baseline level of protection.
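For what it's worth, my best reading of the spec for such a one-liner (an
untested sketch, not from the article) would be a script-src-only policy:
omitting 'unsafe-inline' blocks inline scripts, `*` still permits external
scripts from any origin, and leaving every other directive unset leaves
styles, fonts, and other resources alone:

    Content-Security-Policy: script-src *

Note this also blocks eval(); you'd have to add 'unsafe-eval' if the page
depends on it.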

If, instead, we just defined a simple header, meta tag, or even an HTML
attribute to block run-of-the-mill XSS attacks on a typical webapp or open-
source CMS, the Internet might have become a much safer place already.

