Edit: someone asked how this works..
1) it looks up the resource in the mapping (linked above), matching the CDN and file path.
2) if found, it replaces the request with the copy it includes.
So for those files, requests are never made to the CDN.
If the website uses a different CDN, a library that isn't recognized, or a version that isn't recognized, then the request is still made.
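Roughly, the lookup might look something like this (my own sketch of the idea; the data structure and names are illustrative, not the add-on's actual format):

  // Illustrative only: a tiny slice of the CDN -> bundled-file mapping.
  const mappings = {
    "ajax.googleapis.com": {
      "/ajax/libs/jquery/{version}/jquery.min.js": "resources/jquery/{version}/jquery.min.js"
    },
    "cdnjs.cloudflare.com": {
      "/ajax/libs/jquery/{version}/jquery.min.js": "resources/jquery/{version}/jquery.min.js"
    }
  };

  // Return the bundled resource for a CDN URL, or null if it isn't recognized.
  function lookupLocalResource(requestUrl) {
    const url = new URL(requestUrl);
    const hostMappings = mappings[url.hostname];
    if (!hostMappings) return null; // different CDN: request goes out as usual
    for (const [pattern, target] of Object.entries(hostMappings)) {
      const regex = new RegExp("^" + pattern.replace("{version}", "([\\d.]+)") + "$");
      const match = url.pathname.match(regex);
      if (match) return target.replace("{version}", match[1]); // known lib and version
    }
    return null; // unrecognized lib or version: request still made
  }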
Last-Modified/If-Modified-Since is an optimisation trick which exists for the situation where the person running the website hasn't bothered to explicitly define expiry periods for content.
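For anyone unfamiliar, the exchange looks roughly like this (paraphrased, not captured from a real server): the browser revalidates its cached copy, and the server answers 304 if nothing changed.

  GET /ajax/libs/jquery/2.1.4/jquery.min.js HTTP/1.1
  Host: ajax.googleapis.com
  If-Modified-Since: Thu, 18 Jun 2015 20:12:33 GMT

  HTTP/1.1 304 Not Modified

The 304 saves the transfer, but the conditional request itself still reaches the CDN, which is exactly the kind of request Decentraleyes avoids for the files it bundles.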
Of course the P2P nature of the project means other people can find out exactly which of those resources you're looking at...
This sort of system helps against some sorts of snooping, but certainly not nation-state adversaries.
But these resources are probably already cached by the browser anyway (using the appropriate http headers). So how can this solution add any improvements to that, once the resources have been loaded for the first time?
I often use "private browsing" as a way to get another login session (e.g., log in as a test user while keeping the admin user logged in).
The obvious problem was that storing scripts locally got a bit out of control when considering having to store all versions.
* Privacy Badger
Clean Links - removes redirects from search engines, Facebook, Twitter, etc. to hide the fact that you clicked a link. Google doesn't know which link you clicked in the search results, so if you block GA, it can't track you (see the sketch after this list): https://addons.mozilla.org/en-US/firefox/addon/clean-links/
HTTPS Everywhere https://www.eff.org/https-everywhere
uBlock Origin : https://github.com/gorhill/uBlock
uMatrix : https://github.com/gorhill/uMatrix
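The mechanism behind Clean Links is simple enough to sketch; the wrapper format below is just the classic Google-style redirect as an example (details vary per site):

  // Illustrative only: unwrap a tracking redirect of the form
  // https://www.google.com/url?q=<real destination>&...
  function unwrapRedirect(href) {
    const url = new URL(href);
    // searchParams.get() already percent-decodes the value
    const target = url.searchParams.get("q") || url.searchParams.get("url");
    return target && target.startsWith("http") ? target : href;
  }

  // unwrapRedirect("https://www.google.com/url?q=https%3A%2F%2Fexample.com%2Fpage")
  //   -> "https://example.com/page"

As I understand it, the add-on rewrites such links in the page, so the click goes straight to the destination and the redirect endpoint never sees it.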
I have used NoScript, AdBlock Plus and Ghostery before but found they were lacking in functionality, flexibility and performance.
I used Privacy Badger too but, if I remember correctly, it is based on the same engine as ABP and suffers from the same performance problems.
What exactly does it do?
I've considered sharing parts of my global ruleset so others can just copy-paste the sections/sites they want to whitelist without having to discover what's required themselves.
Check out Panopticlick to see your fingerprint.
Someone should make a series of common groups, and those who care should stick to just those.
I think this is a topic that gets discussed by (for example) the Firefox developers, but I get the feeling that this is one of the hardest problems to fix.
I would like to see a browser mode akin to the privacy mode most browsers feature that reduces the number of identifying variables (at the cost of features). So instead of telling the world that my time zone is CET and I prefer English (GB) as language, it would select a random time zone and locale (although this does inconveniently mean that sites might suddenly serve me content in Portuguese).
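As a rough sketch of what one piece of that could look like, here is how a WebExtension could randomize the Accept-Language header (my own illustration of the idea, not an existing add-on; JS-visible values like navigator.language and the time zone would need to be overridden separately):

  // Pick a locale once per session and send it on every request instead of the real one.
  // (Randomizing per request would itself become a distinguishing signal.)
  const locales = ["en-US", "de-DE", "fr-FR", "pt-BR", "ja-JP"];
  const sessionLocale = locales[Math.floor(Math.random() * locales.length)];

  browser.webRequest.onBeforeSendHeaders.addListener(
    (details) => {
      for (const header of details.requestHeaders) {
        if (header.name.toLowerCase() === "accept-language") {
          header.value = sessionLocale;
        }
      }
      return { requestHeaders: details.requestHeaders };
    },
    { urls: ["<all_urls>"] },
    ["blocking", "requestHeaders"]
  );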
It'd have to be more like a VM running the OS with the highest market share (Windows), the browser with the highest market share (Internet Explorer), with the most common language used, with the most common time zone of users of the site you're accessing (varies by site and time of day), etc.
Anything else and you could stand out in the crowd. Using Linux or OS X, for example, really makes fingerprinting easier for sites, which is quite disturbing.
Randomizing the values of certain attributes, as you've described, may help a lot if more people adopt it and make fingerprinting a futile exercise to those using it. :) If the people doing the fingerprinting see millions being successfully tracked with just a handful they're unable to track, they wouldn't even care. It's kinda like ad blocking. A few do it and it's not seen as a problem. If the majority does it, then the sites take notice. For a larger scale effect, browser makers should get into this. Mozilla, Apple, Microsoft and Google, in that order (with Opera somewhere in the middle), may be interested in thwarting browser fingerprinting.
* RequestPolicy: No longer developed, but still works for me
* RequestPolicy Continued
* Policeman: Haven't tried it, but AFAIK it also filters by data type (e.g., allow media requests from x.com to y.com, but not scripts)
Now I use Self Destructing Cookies, uBlock Origin and HTTPS Everywhere. That works just fine without taking the fun out of the web.
CsFire is the result of academic research, available in the following publications: CsFire: Transparent client-side mitigation of malicious cross-domain requests (published at the International Symposium on Engineering Secure Software and Systems 2010) and Automatic and precise client-side protection against CSRF attacks (published at the European Symposium on Research in Computer Security 2011)
//no 3p cookies
//less referer headers
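In case it helps anyone, I believe those comments refer to about:config preferences along these lines (my guess at the exact prefs and values; double-check before putting them in user.js):

  // user.js -- illustrative values only
  user_pref("network.cookie.cookieBehavior", 1);   // no 3p cookies (1 = block third-party cookies)
  user_pref("network.http.sendRefererHeader", 1);  // less referer headers (1 = only send when clicking links)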
From what I have read, that can break quite a few things in Firefox, so be careful.
* Better privacy
* uBlock with EasyPrivacy list
* Https everywhere
This looks amazing. Any particular setting one should be aware of or must change to keep things less annoying and smooth?
Chrome gives you the option to delete cookies on quit and has exceptions for whitelisting, is "Self destructing cookies" any different?
PS. If you need a favicon from any site for a plugin or otherwise, the easiest way I've found is https://www.google.com/s2/favicons?domain=duckduckgo.com. It will grab it from wherever the sysadmin decided to put it.
uMatrix (used to use NoScript, but I prefer this now)
There's really no reason for privacy conscious individuals to use Ghostery when uBlock Origin can do the exact same thing.
Here, I'll explain what I think it does so you can at least correct what I'm missing:
(1) User visits web site example.com and needs to get file foo.jpg from example.com.
(2) foo.jpg is available at some content delivery network, let's say Akamai.
(3) User's browser gets foo.jpg from Akamai.
(4) Akamai now knows the user's IP address, the Referer (example.com), and the user agent info (browser version, OS version, etc.)
So what does the Decentraleyes add-on do? I think it does the following:
First, this add-on apparently cuts out the Referer when the browser asks for foo.jpg, but Akamai would still get the IP address (and the user agent info unless the user is disguising that). With the IP address you've been tracked, so does this really help?
Second, this add-on apparently gives you a local copy of foo.jpg if it exists (i.e., a copy of foo.jpg already cached on your own computer). Well, the first copy of foo.jpg had to have come from somewhere (either example.com or Akamai), so you've already been tracked.
NOTE: I'm not criticizing the add-on at all! I'm just trying to understand it.
The extension ships with copies of common files (jQuery etc.) bundled inside it, plus a list of CDN urls that return those files. Every time the browser makes a request to one of those urls, the extension serves the local file instead.
How does that help? It speeds up browsing, since you have a local copy of the requested file, and it also increases privacy. Many sites are lazy and load, say, jQuery from Google's CDN; normally, when you visit such a site, Google can track you because your browser makes a request to them.
The only weakness with this approach is that it only works for urls known to the author. A request to an unknown CDN, or even a known CDN but a new file, will still be made (AFAIR there is an option to block unknown files on known CDNs, but that would often break many websites).
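To make the mechanics concrete, the interception itself can be done with the blocking webRequest API, roughly like this (a simplified sketch, not the add-on's actual code; it assumes the bundled files are listed as web-accessible resources, and the URL map is shortened to a single entry):

  // Redirect known CDN requests to copies bundled with the extension.
  const localCopies = {
    "https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js":
      "resources/jquery/2.1.4/jquery.min.js"
  };

  browser.webRequest.onBeforeRequest.addListener(
    (details) => {
      const localPath = localCopies[details.url];
      if (localPath) {
        // Serve the bundled copy; the request never reaches the CDN.
        return { redirectUrl: browser.runtime.getURL(localPath) };
      }
      return {}; // unknown CDN, library, or version: the request goes out as usual
    },
    { urls: ["*://ajax.googleapis.com/*"] },
    ["blocking"]
  );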
With that said, I'm not 100% sure what it's doing for "normal" CDN files either. What you're saying sounds like a flaw, and I don't know if I don't see the obvious answer, or if you're right and that's a significant problem.
(2) The developer at example.com has included the version of jQuery from Google Hosted Libraries or another CDN so the request goes to Google's CDN.
(3) Google adds this request to what they already are tracking.
The addon includes a bunch of versions of popular libraries: https://github.com/Synzvato/decentraleyes/tree/master/data/r...
That would be bad if the content changed, but in some cases you can be sure it won't.
# cat /etc/hosts | grep google
MaxCDN's Bootstrap CDN implements it for example: <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstra... rel="stylesheet" integrity="sha256-7s5uDGW3AHqw6xtJmNNtr+OBRJUlgkNJEo78P4b0yRw= sha512-nNo+yCHEyn0smMxSswnf/OnX6/KwJuZTlNZBjauKhTK0c+zT+q5JOCx0UFhXQ6rJR9jg6Es8gPuD2uZcYDLqSw==" crossorigin="anonymous">
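For reference, the integrity value is just a base64-encoded hash of the exact bytes being served, so it can be reproduced locally; a quick Node.js sketch (file name assumed):

  // Compute an SRI integrity value for a local copy of the file.
  const crypto = require("crypto");
  const fs = require("fs");

  const data = fs.readFileSync("bootstrap.min.css");  // the exact bytes the CDN serves
  const hash = crypto.createHash("sha256").update(data).digest("base64");
  console.log("sha256-" + hash);  // goes into the integrity attribute

If the CDN ever serves different bytes, the hash no longer matches and the browser refuses to use the file, which covers the "content changed" worry above.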
Check out the source here: https://decentraleyes.org/test/
It would be much better to serve jQuery from the decentraleyes.org domain and run the test with that.
tl;dr: It needs jQuery from Google to test if jQuery from Google can be loaded via $.ajax.
Doesn't the browser do that automatically for you?
The reason for this is that by installing various non-default addons, you're actually making your browser more unique. As a consequence, you're making it easier to link all of your Tor activity back to a single person.
Question: where do you get your info from? I'm trying to gather twitter lists into this repo to know the best sources of info.
Please collaborate: https://github.com/davidpelayo/twitter-tech-lists