If browsers were serious about cross-site content issues, Google Ads wouldn't work. Google insists in their policies that their ads must not be placed in IFRAME blocks,[1] where they can't see the page context. Yet, for security, you want any outside content that executes code to be sandboxed in an IFRAME.
At some point you have to leverage third parties in order to get your business to function. To me, letting an ad company run JavaScript on your page is something akin to letting hired contractors in the real world stroll through your office unwatched. Yes, it's a security concern, but it's also necessary to operate a business.
In reality, we don't have a lot of truly secure facilities, but instead rely on legal process to stop bad actors, and make people whole after the fact.
On the web, there's always been a push for total security, which I don't think is useful or particularly realistic to implement.
> is something akin to letting hired contractors in the real world stroll through your office unwatched. Yes, it's a security concern, but it's also necessary to operate a business.
Nope, it's entirely possible to have them accompanied by a member of physical security the whole time. The question is always how valuable your security is vs. the cost of maintaining it.
100% agree. If Google is serious about CSP, show me where they document the CSP that doesn't break AdSense or Analytics. It's a moving target, AFAICT, and the reason why I don't use CSP in production (at work).
The idea behind CSP based on nonces (as opposed to the "old" approach of using whitelists) is that you add the valid nonce token only to the script directly sourced from your page, and trust propagates (via 'strict-dynamic') to other scripts loaded dynamically by the "loader" script. This way you no longer have to care about what domains the widget uses -- if you trust the initial script, give it a nonce and it will execute, along with the subresources it needs.
Of course you can still have a domain whitelist or use Subresource Integrity if you're loading scripts from potentially untrusted infrastructure. But the nonce-based approach is meant precisely to avoid the "moving target" problem you mentioned.
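To make that concrete, here's a minimal sketch assuming a Node/Express backend; the widget/CDN hostnames and the sha384 digest are placeholders, not real services:

    // Minimal nonce-based CSP sketch (assumed Node/Express setup).
    import { randomBytes } from "crypto";
    import express from "express";

    const app = express();

    app.get("/", (_req, res) => {
      // Fresh, unguessable nonce for this response only.
      const nonce = randomBytes(16).toString("base64");

      // Only scripts carrying the nonce may run; 'strict-dynamic' lets the
      // trusted loader add its own subresources without listing their hosts.
      res.setHeader(
        "Content-Security-Policy",
        `script-src 'nonce-${nonce}' 'strict-dynamic'; object-src 'none'; base-uri 'none'`
      );

      res.send(`<!doctype html>
    <script nonce="${nonce}" src="https://widgets.example.invalid/loader.js"></script>
    <!-- SRI additionally pins the exact bytes when the CDN itself isn't fully
         trusted (the integrity value below is a placeholder, not a real digest): -->
    <script nonce="${nonce}" src="https://cdn.example.invalid/widget.js"
            integrity="sha384-PLACEHOLDER" crossorigin="anonymous"></script>`);
    });

    app.listen(3000);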
That's a very Google idea, and it's a bad one. Malware distributed through Google's ad system has been an ongoing problem.[1][2] Letting Google's advertiser customers inherit Google's trust is a terrible idea.
Google needs to put all their ads in IFRAME sandboxes. Tightening up on what an IFRAME can do wouldn't be a bad idea, either. No popups, no self-starting video, no absolute positioning outside the clipping pane, no expanding the frame size...
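For what it's worth, the sandbox attribute already covers part of that list; a rough sketch of what serving an ad in a locked-down frame could look like (the ad URL and slot id are made up):

    // Embed third-party ad content in a sandboxed iframe from the host page.
    const frame = document.createElement("iframe");
    frame.src = "https://ads.example.invalid/unit.html";
    // allow-scripts lets the ad run its code, but without allow-popups,
    // allow-top-navigation, or allow-same-origin the frame can't open windows,
    // navigate the embedding page, or touch its DOM and cookies.
    frame.setAttribute("sandbox", "allow-scripts");
    frame.width = "300";
    frame.height = "250";
    document.getElementById("ad-slot")?.appendChild(frame);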
This tool they just released will hopefully help site administrators craft specific CSPs for specific parts of their site -- other, more generic tools already exist.
On the HN thread on the cited study, I posted [1] that CSP is 'another damn header' that has to be included to stay secure, and, unlike many of the 'other damn headers', its value should ideally be fine-tuned to the particular protected resource rather than being a site-wide hardcoded string.
I think that, more so than another configuration helper tool, what the Web really needs is a CSP rule engine that allows rules to be specified declaratively ahead of time and integrates with an existing web framework so that the resulting CSP value can be spliced into the outgoing response. Portions of this approach are implicitly proposed by OWASP here [2], but I've yet to see it written down formally, as opposed to just some example code. Widely adopting this approach would result in a paradigm shift that lifts CSP from 'just a header' to a first-class construct integral to the operation of the web application.
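As a rough sketch of what I mean, assuming Express (the policy shape, route keys, and serialize helper are hypothetical, not an existing library):

    import express from "express";

    type Policy = Record<string, string[]>; // directive -> allowed sources

    // Rules declared ahead of time, per protected resource.
    const policies: Record<string, Policy> = {
      "/admin": { "script-src": ["'self'"], "frame-ancestors": ["'none'"] },
      "/blog": { "script-src": ["'self'", "https://widgets.example.invalid"] },
    };

    // Turn a declarative policy into a Content-Security-Policy header value.
    const serialize = (p: Policy): string =>
      Object.entries(p)
        .map(([directive, sources]) => `${directive} ${sources.join(" ")}`)
        .join("; ");

    const app = express();

    // Splice the evaluated policy into every outgoing response.
    app.use((req, res, next) => {
      const policy = policies[req.path] ?? { "default-src": ["'self'"] };
      res.setHeader("Content-Security-Policy", serialize(policy));
      next();
    });

    app.listen(3000);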
CSP is another baseline config that web application developers consistently don't include, for whatever reason. It could be plain ignorance, but I think it goes deeper than that: CSP is too specific for the larger problem at hand, which is JavaScript itself. If I want to perform XSS on a site, I will find a way. There are still unpatched SVG vectors I can use in Chrome which have gone unnoticed for the longest time, and they can be, and are being, used today. There are just too many code paths in browsers to exploit, and CSP only partially addresses the problem. I'm still seeing TrueType libraries from the 90s executing arbitrary code in browsers, and it's 2016.
> There are still unpatched SVG vectors I can use in Chrome which have gone unnoticed for the longest time, and they can be, and are being, used today.
If you know of exploitable XSS vectors in SVG implementations, you should report them. Not only would you get some nice big bug bounties, you'd, you know, close XSS vulnerabilities for hundreds of millions of people.
I have been tempted to go down the bug bounty route, but in this particular instance I might get a small win without addressing the larger problem of browsers and JavaScript themselves. Browsers are a big, teeming ball of complexity, and rather than patch and forget, I would rather stick to a single-duty, stripped-down program like Lynx, or a hardened version of Firefox with heavy about:config tweaks. And, of course, JavaScript turned off at all costs. I routinely ask people on public forums to switch to Firefox and block JavaScript, and I am a firm advocate of Lynx for surfing text-based websites. I might not be able to view some websites, but that's on them. It's a webmaster's job to make a website accessible, not mine.
That's all great and dandy for you and the few people inside your circle of influence, but there are a bajillion people on the internet, using all kinds of devices, for whom this ranges from impractical to impossible.
Even getting this kind of attack mitigated in one of the major browsers makes it a much less appealing thing to try to exploit and raises awareness of the problem, potentially improving everyone's security.
Putting this kind of responsibility on others makes you part of the problem; instead, you could help be part of the solution. That's not to say webmasters shouldn't be doing a better job at this, but sometimes their abilities are limited. I'm surprised that anyone who had the ability to do something substantial about it would, as a consequence, be willing to let this be inflicted on others.
Yeah, I'm going to go out on a limb and suggest that you haven't really looked into your ideas for these attacks, and if you did you'd discover they weren't actually exploitable.
No, I stated that I took out entire classes of attacks by using a single-duty browser like Lynx and a hardened version of Firefox with JS disabled. Rather than patch and forget, I addressed the larger problem head on. The last thing a browser vendor wants to hear is a user complaining that JavaScript is enabled by default. There is a vested interest in having JavaScript all-pervasive in browsers now, and huge lobby groups campaigning for a JavaScript-only web, and this is very counterproductive. Of course I can exploit Chrome, and those exploits do work. My issue is that even if I report them, another one will pop up, because the design of Chrome (and Firefox) is fundamentally flawed from the very outset. Complexity is the enemy of security, and the onus is on the user to mitigate, not always on the vendors, the bug reporting ecosystem, or even the bug bounty programs.
[1] https://support.google.com/adsense/answer/3394713?hl=en