

COWL: A Confinement System for the Web - fla
http://cowl.ws/

======
mahmud
This has some heavy hitters behind it, both in authorship and their
organizations.

I wondered how it compares to Caja, and this is what their "Towards .." paper
states:

 _" Caja, ADSafe, and FBJS offer sandboxing by defining safe subsets of
JavaScript; these subsets tend not to support JavaScript’s full functionality,
and as retrofits atop existing browser interfaces, they are also vulnerable to
various attacks"_

~~~
hackerweb
Caja is something you can do server-side to filter JavaScript. COWL is a
modification to the browser security architecture. Hence, unlike Caja, a web
site cannot unilaterally deploy COWL. On the other hand, if COWL or something
like it makes it through the standardization process, it would have a far more
significant impact on web security than Caja.

~~~
nl
Caja is (more?) often deployed on the client side to filter JavaScript.

------
zaroth
I like the page, and the example. The key is "privacy with functionality". You
can write some JS with certain code paths able to access outside resources,
but once you enter constrained code paths, you are limited, essentially
through a dynamically applied SOP (same-origin policy).

Their example of a password strength checker -- say you download updated
bitmaps (Bloom filters) from a central location which flag risky passwords.
Now you can trust that any code path which actually sees the password will be
locked down through taint detection.
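The Bloom-filter half of that example can be sketched in plain JavaScript. This is purely an illustration of the data structure, not COWL code; the bit-array size, hash count, and FNV-style hashing are arbitrary choices of mine:

```javascript
// Toy Bloom filter for flagging risky passwords. May report false
// positives, but never false negatives.
class BloomFilter {
  constructor(bits = 1024, hashes = 3) {
    this.bits = bits;
    this.hashes = hashes;
    this.data = new Uint8Array(bits);
  }
  // FNV-1a-style hash, seeded so we get several independent-ish hashes.
  _hash(str, seed) {
    let h = 2166136261 ^ seed;
    for (let i = 0; i < str.length; i++) {
      h ^= str.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return (h >>> 0) % this.bits;
  }
  add(str) {
    for (let s = 0; s < this.hashes; s++) this.data[this._hash(str, s)] = 1;
  }
  mightContain(str) {
    for (let s = 0; s < this.hashes; s++) {
      if (!this.data[this._hash(str, s)]) return false;
    }
    return true;
  }
}

// In the COWL scenario, the bitmap is fetched *before* the checker
// reads the password; after that read, the code path loses network access.
const risky = new BloomFilter();
["123456", "password", "qwerty"].forEach(p => risky.add(p));
console.log(risky.mightContain("password")); // true (no false negatives)
console.log(risky.mightContain("correct horse battery staple"));
```

The point of the Bloom filter here is that the risky-password set can be shipped down as a compact bitmap ahead of time, so the confined code never needs to phone home with the password itself.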

I wonder if there's a basic mechanism, or if they have to identify every
possible avenue for tainting? There are a lot of static locations you could
stash bits, so are you just denied access to all of that? Or is it somehow all
mirrored?

I want to make sure I really understand what they are saying can be trusted,
what privacy guarantees are provided? What exactly is sandboxed?

~~~
ezyang
Hello, one of the paper authors here.

One of the jobs that our implementation has to do is close down all channels
by which JavaScript could leak data: this includes postMessage, the DOM,
XHR, cookies, local storage... Fortunately, these overt channels are
well-specified DOM APIs, so from the implementor's perspective it's not too
difficult to lock them all down. (And we just deny access; there are no
"shadow" structures involved.) There's nothing too fine-grained going on: once
a context reads private data, everything that data isn't allowed to flow to is
locked down.
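To make the "deny access once tainted" discipline concrete, here's a toy sketch of my own, not COWL's implementation (which enforces this in the browser itself, across all of the channels listed above). A single flag stands in for the label check, and `send()` stands in for any outbound channel like XHR or postMessage:

```javascript
// Toy model: a context that irreversibly loses its outbound channel
// the moment it reads private data. Names here are hypothetical.
function makeConfinedContext(secretValue) {
  let tainted = false;
  return {
    readSecret() {
      tainted = true; // label raised: context now holds private data
      return secretValue;
    },
    send(url, data) { // stands in for XHR / postMessage / storage writes
      if (tainted) {
        throw new Error("confined: context has read private data");
      }
      return `sent ${data} to ${url}`;
    },
  };
}

const ctx = makeConfinedContext("hunter2");
console.log(ctx.send("https://example.com", "telemetry")); // allowed
ctx.readSecret();
try {
  ctx.send("https://evil.example", "exfil");
} catch (e) {
  console.log(e.message); // "confined: context has read private data"
}
```

The coarseness matches what ezyang describes: there's no per-byte tracking, just a one-way switch on the whole context once it has seen the secret.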

------
higherpurpose
I wish COWL went as far as this proposal for domain sandboxing:

[http://mortoray.com/2014/09/30/a-secure-and-private-browser-...](http://mortoray.com/2014/09/30/a-secure-and-private-browser-sandbox/)

I know Google would never agree to do something like that - unless Mozilla,
Apple (the new "privacy" company) and Microsoft (another one pretending to
defend user privacy) would put pressure on Google by adopting such a system,
together.

~~~
walterbell
> _Resources can still be included from a variety of domains, but the data
> they use is exclusive to this sandbox. A script from google.com on
> youtube.com would be completely isolated from the same script running at
> blogger.com._

1\. How would that work exactly? There's nothing stopping the script from
sending data back to a server for later association with another site.

2\. How would a site (or the user) authorize specific 3rd-parties to operate
within a given site? This problem is seen by anyone running NoScript today,
observing how sites break/work as a subset of 3rd-party scripts is
incrementally enabled.

3\. If servers are sharing data in real time with commercial 3rd-parties,
would they be required to disclose this to users? Publishers don't disclose
this today. NoScript allows users to identify 3rd-parties and block them. If
this is done on the server, is there a loss in transparency?

4\. In general, systems which implement strong isolation are immediately met
with user requests to relax this isolation in specific contexts. E.g. Apple
added inter-app communication in iOS 8. How can "contextual whitelists" be
maintained for cross-domain risk management? Should these be determined by the
browser vendor, user, or server? Do we need a multi-stakeholder model like
CSS, with user preferences negotiated at runtime with publisher preferences?
What happens when proprietary DRM is running in the client?

