- when will you stop Twitter from knowing all about YouPorn's users? Yes, HN user: open your developer console and look for the cookie coming from syndication.twitter.com. If you ever try to visit http://syndication.twitter.com, you end up on "Sorry, that page doesn't exist!".
- when will you ask for user permission before revealing hardware-related information (via the screen and navigator objects)? I can't think of any use case for revealing how many cores my computer has when JS only uses one. Same for the screen object, as we already have the window object. (See the snippet at the end of this comment.)
- when will you create a feature to dynamically update the user agent?
- when will you create a feature to trick browser fingerprinting?
- when will you enforce a no third party cookie by default? Especially for websites that don't comply with the Do Not Track header. Unfortunately, our entire industry doesn't seem to care about it (funnily enough, Google is one of the rare good students in this area).
I guess there are a lot more "industry standards" I'm not aware of.
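To make that concrete, here's a minimal sketch you can paste into any page's developer console; every one of these reads works today without a single permission prompt:

    // Hardware- and screen-related values any script can read silently:
    const surface = {
      cores: navigator.hardwareConcurrency, // CPU core count
      platform: navigator.platform,         // e.g. "Linux x86_64"
      screenWidth: screen.width,            // screen size, beyond what
      screenHeight: screen.height,          // the window object exposes
      colorDepth: screen.colorDepth,
      pixelRatio: window.devicePixelRatio,
    };
    console.log(surface); // free fingerprinting entropy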
That's not true with web workers. And "How many workers should I spin up?" is a common question people actually using them want to answer...
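The usual sizing pattern looks something like this (a minimal sketch; "worker.js" is a hypothetical script name):

    // Size a worker pool to the reported core count, with a fallback
    // for browsers that hide or spoof the value.
    const poolSize = navigator.hardwareConcurrency || 4;
    const workers = [];
    for (let i = 0; i < poolSize; i++) {
      workers.push(new Worker("worker.js"));
    }

Take away hardwareConcurrency and that code either underuses a 16-core machine or oversubscribes a 2-core one.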
I'm highly sympathetic to reducing the fingerprinting attack surface, by the way; we should just be clear that providing sites with less information does mean they can't do various performance optimizations, and in some cases can't even provide correct functionality. That tension is at the heart of all the anti-fingerprinting efforts browsers are involved in...
Asking users for permission isn't really a solution either, unfortunately: all the bad actors will just spam permission prompts continuously. I've browsed before in a "prompt for attempts to set cookies" mode, back when Firefox had this feature. It wasn't pretty, even in the mid-2000s.
> when will you enforce a no third party cookie by default?
When it can be done without breaking too many users' day-to-day browsing.
> Especially for websites that don't comply with the Do Not Track header
How does one go about determining that? This is an honest question; I'm not aware of any database of sites that classifies them along this dimension.
Disclosure: I work on Firefox, and various people, including myself, have been pushing for anti-tracking bits for a while now. Some of them have shipped; others are in the works.
>> when will you enforce a no third party cookie by default?
> When it can be done without breaking too many users' day-to-day browsing.
Doesn't Safari do this already? Isn't that proof that millions of users (every iPhone, iPad, and more than 30% of all Mac users) are having no real issues using the web with that in place?
Not quite. For example, they might be on different networks, have different zoom levels set, etc.
> Doesn't Safari do this already?
Last I checked, Safari "blocks" (not quite; it's double-keying, not actually blocking) some third-party cookies, but not all. In particular, it doesn't block third-party cookies from sites you have visited in a first-party context. So for example it doesn't block Facebook or Google trackers.
The Firefox "no third party cookies" mode, which actually blocks all third-party cookies, is quite different from the Safari behavior...
For the former, any amount of fingerprinting lets ad companies make the case that their ads are better or worth more money. For the latter, the EFF Panopticlick research project seems to be pretty good at giving you an idea.
I don't have an iPad/iPhone etc. to test Safari with; seems like they'd be amongst the most homogeneous of devices. Perhaps someone with one of those devices could tell us if the EFF can uniquely identify their device. It was able to uniquely identify all of my devices.
I've had 3rd party cookies disabled for years. I can count the number of times this has caused trouble on one hand.
It also turns out that this varies a _lot_ based on browsing patterns. For example, I never use the "log in with Facebook/Google/etc" things. People who do report a lot more problems from blanket third-party cookie blocking.
- even the W3C gets trapped (https://pbs.twimg.com/media/DWtqPzFUQAExO2h.jpg), probably without their knowledge, but still, that's the W3C ...
- Twitter tracking YouPorn's users, considering Twitter's track record when it comes to database leaks. The damage that could be done here from a political perspective would be absolutely terrible.
- ..... many other examples, as you probably know a lot more
> That's not true with web workers. And "How many workers should I spin up?" is a common question people actually using them want to answer...
It would be common when it comes to building real-world complex apps, which isn't what most of the web is about (I know one person isn't very representative). Those questions are legit, but they feel like an edge case to me, not the general rule, considering the market share of WordPress and co.
I would love to have a popup when it comes to revealing information that should be considered an edge case. If a newspaper needs access to this information, there's little chance it benefits my experience; it's more likely to benefit the complex network of advertising, and sometimes even real-time bidding systems with performance imperatives for which web workers are a great fit.
> Asking users for permission isn't really a solution either, unfortunately: all the bad actors will just spam permission prompts continuously.
As I see it, that doesn't mean the solution itself is wrong, but rather that the right compromise hasn't been found yet. There's a world of difference between a website creating a cookie for legitimate purposes and a dependency of that website that relies on another one that relies on yet another one.
> How does one go about determining that? This is an honest question; I'm not aware of any database of sites that classifies them along this dimension.
Considering that some actors like Cloudflare track a big part of the internet, a database approach is broken by design. As I see it, the only approach that can work is behavior-based: if a third-party website forces a cookie despite the Do Not Track header, it's shady and thus shouldn't be acceptable (e.g. Twitter, Facebook and many more, as the "industry standard" is to ignore DNT). What if you did what Chrome did when forcing HTTPS everywhere, and showed a message saying "this site has unfair trackers" instead of "this site isn't secure"? A behavior-based check could look like the sketch below.
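To sketch what such a behavior-based rule might look like as a WebExtension (an assumption, not an existing tool; it needs the "webRequest" and "webRequestBlocking" permissions, and isThirdParty() is a hypothetical helper comparing the request's origin against the requesting document's):

    // Strip Set-Cookie from third parties that ignore the DNT signal.
    browser.webRequest.onHeadersReceived.addListener(
      (details) => {
        if (navigator.doNotTrack === "1" && isThirdParty(details)) {
          const responseHeaders = details.responseHeaders.filter(
            (h) => h.name.toLowerCase() !== "set-cookie"
          );
          return { responseHeaders };
        }
      },
      { urls: ["<all_urls>"] },
      ["blocking", "responseHeaders"]
    );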
> reducing the fingerprinting attack surface, by the way; we should just be clear that providing sites with less information does mean they can't do various performance optimizations,
Not all the time. I can think of a few things that wouldn't have such an impact if implemented well:
- I've read a bit about fingerprinting using canvas and WebGL. Why not add some sort of randomness that is pretty much invisible to the human eye but renders those algorithms ineffective? (A sketch of the idea follows this list.)
- don't make the navigator object identical across all pages; make a few tiny changes every time. For example, Firefox provides the buildID as part of the navigator object. Mine on this machine is "20180327223059". Will it really break the world if one website receives "20180327223434" while another receives "20180327224387"? If somebody is optimising based on the exact time the build was created, it sounds shady and creepy at best.
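For the canvas point, a rough sketch of the kind of noise I mean (an assumption about how it could be implemented, not how any browser actually does it):

    // Flip the low bit of a few random pixels before canvas data is read
    // back: identical to a human eye, different hash on every run.
    const origToDataURL = HTMLCanvasElement.prototype.toDataURL;
    HTMLCanvasElement.prototype.toDataURL = function (...args) {
      const ctx = this.getContext("2d");
      if (ctx && this.width > 0 && this.height > 0) {
        const img = ctx.getImageData(0, 0, this.width, this.height);
        for (let i = 0; i < 16; i++) {
          const p = Math.floor(Math.random() * img.data.length);
          img.data[p] ^= 1;
        }
        ctx.putImageData(img, 0, 0);
      }
      return origToDataURL.apply(this, args);
    };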
I understand some optimisations aren't possible without breaking a lot of websites, and your users will mostly wonder why they can't navigate properly. But why not create a different mode, let's say a safe mode, that expressly says some websites might appear weird, but only because you're taking active measures to block any sort of tracker? As of today, private browsing isn't really effective at protecting against fingerprinting.
    privacy.firstparty.isolate = true
    privacy.resistFingerprinting = true
> when will you create a feature to dynamically update the user agent?
Extensions can do that. Not everything needs to be in the core browser, does it?
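For the record, the WebExtension API makes this a few lines (a minimal sketch; it needs the "webRequest" and "webRequestBlocking" permissions, and the UA string is just an example value):

    // Rewrite the User-Agent header on every outgoing request.
    const FAKE_UA =
      "Mozilla/5.0 (X11; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0";
    browser.webRequest.onBeforeSendHeaders.addListener(
      (details) => {
        for (const header of details.requestHeaders) {
          if (header.name.toLowerCase() === "user-agent") {
            header.value = FAKE_UA;
          }
        }
        return { requestHeaders: details.requestHeaders };
      },
      { urls: ["<all_urls>"] },
      ["blocking", "requestHeaders"]
    );

Note this only rewrites the header; what page scripts see in navigator.userAgent would have to be overridden separately via a content script.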
I think those features should be built by the browser vendor. Why? Because pretty much all the existing extensions claiming to "protect privacy" are doing a poor job of it. They all rely on an internal database of "bad guys", as if it were technically possible to even build such a database.
For example, Cloudflare already controls a large number of websites through their CDN. They send cookies regardless of the Do Not Track header. I haven't seen one of those extensions block websites that use Cloudflare. The Do Not Track header is also a funny joke that even the W3C doesn't respect (https://pbs.twimg.com/media/DWtqPzFUQAExO2h.jpg).
Not all do. uMatrix comes with a whitelist-based approach, so all 3rd party requests (modulo images and CSS if you so choose) are blocked and need to be whitelisted to unbreak sites.
Thank you for suggesting uMatrix. It just passed the cookie test that none of the above succeeded at; I might have found my new internet condom.
Optionally, and unrelated to security, you can try the "Dark Reader" extension, which makes all sites darker ^_^ Especially useful with dark themes.
In another comment you say:
> on Chromium and Chrome there's a nice feature to block third party cookies. For obvious reason, it's not enabled by default but it's there and doesn't affect the performances.
Why is this a problem when we're talking about Firefox, but an "obvious reason" with Chrome?
Or am I misreading you?
I've also had good results with Canvas Blocker. Decentraleyes is another privacy add-on with a good concept that's worth looking into as well.
I've been using the Containers plugin and its predecessors for years to isolate Facebook, but I've always liked it because when someone links to Facebook content, I can read it logged out in my normal tab. Obviously, on top of this, I'm using uBlock/uMatrix anyway.
It would be counterintuitive if this were how the standard Containers extension worked, but for this specific extension, you're installing it to isolate Facebook, so I think this is how people would expect it to work. If the extension did not open Facebook links in a dedicated Facebook container, I think it would not be doing its job correctly.
To automatically open Facebook links in a logged-in, dedicated-to-FB container is the entire point of this extension. After all, it was built for regular, non-tech-savvy people. To such people, it'd be a bad experience if suddenly they could no longer open FB links sent to them by their friends (because links to FB posts can have non-public visibility).
Personally, I've gotten into the habit of opening most random links in private browsing mode. Besides achieving what you mention, it seems like a good additional mitigation against tracking/security issues/etc. The only downside is the lack of browsing history, but I've never had much use for browsing history (mostly because I've found the search functionality in browsers to be useless).
I think your observation is a good one, and a rule for it should be added (or actually the third rule should be modified).
Firefox Multi-Account Containers is an extension that allows you to compartmentalise cookies and other data into different containers. It's possibly best explained by thinking about how you might use it:
Suppose you have two Twitter accounts: a personal account and a business account. Twitter usually only allows you to be logged into a single account at a time. This means that if you're currently logged into your personal account, you need to log out first before you can use your business account.
With Containers, you can simply create a Business container and log into your business Twitter account in there. That way, you can be logged into two Twitter accounts at the same time.
Containers do not act as ad blockers and perform a very different function.
For more information on Containers, I would suggest reading the extension's description and/or the support page.
This article is talking about a special version of Firefox Multi-Account Containers called Facebook Container. This works similarly to Firefox Multi-Account Containers, but it isolates Facebook to its own dedicated container.
Firefox Containers don't block ads, trackers, or anything. Instead, they isolate websites into their own "containers". Think of it like private browsing mode. Except each container is its own, separate private browser, and they persist.
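For the curious, extensions create and use containers through the contextualIdentities WebExtension API. A minimal sketch (it needs the "contextualIdentities" and "cookies" permissions; the bank URL is hypothetical):

    // Create a container and open a tab whose cookies are keyed to it.
    async function openBankingTab() {
      const identity = await browser.contextualIdentities.create({
        name: "Banking",
        color: "blue",
        icon: "dollar",
      });
      await browser.tabs.create({
        url: "https://bank.example.com",
        cookieStoreId: identity.cookieStoreId,
      });
    }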
They're different approaches with pros and cons, but they can certainly be used together since they're orthogonal.
Personally I use both. uBlock blocks ads/trackers/etc for me, while I use Containers as additional protection for not just social media sites but also to isolate my banking activity, work accounts, etc. It's useful for when you have multiple logins to the same site, and for mitigating some attacks (e.g. CSRF). [NOTE: I'm using the full featured Multi-Accounts Containers add-on, not the Facebook Container add-on mentioned in the article]
That's what they advertise, but it's not even remotely true. I did a bit of an exercise to see how they work. Basically, they all (ad blockers and tracker removers) have a database of bad guys and stop the known bad guys from loading. The problem is all the unknown bad guys. We would need something that is behavior-based, not database-based; doing my research, I couldn't find one that worked as you would expect.
Just one example of the bad guys: Cloudflare sends cookies when the owner of a site uses their CDN, regardless of whether you have set a Do Not Track header. Did any of those tracker blockers manage to block Cloudflare? Nope.
It makes me sad that, for some reason unknown to me, our community doesn't even respect the Do Not Track header. It's literally just decoration.
The basic rule is that if more than one first-party site tries to connect to the same third party, that third party could be tracking you and will be blocked. But it's a little cleverer than that, and has extra rules to just block cookies from common CDNs.
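In other words, roughly this (a simplified sketch; the threshold of 3 and the function names are my assumptions, not the add-on's actual source):

    // Count how many distinct first parties each third party shows up on.
    const seenOn = new Map(); // third-party domain -> Set of first parties

    function observeRequest(firstParty, thirdParty) {
      if (!seenOn.has(thirdParty)) seenOn.set(thirdParty, new Set());
      seenOn.get(thirdParty).add(firstParty);
    }

    function shouldBlock(thirdParty) {
      const sites = seenOn.get(thirdParty);
      return sites !== undefined && sites.size >= 3;
    }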
False. I just visited cloudflare.com and uMatrix blocked 2 Cloudflare cookies, 8 Cloudflare scripts, one tracking pixel each from Bing and Google Ads (who are on the "bad guys" list), a script from Optimizely and an embedded frame from Google Tag Manager. That's with a whitelist that only allows CSS and images (the default, I think), and only from first-party sources.
Surprisingly, the site wasn't even broken.
The Facebook Container and Multi-account Containers addons are great, and I would highly recommend their use. I've been using Multi-account Containers since they were first released. However they do not "stop Instagram and Facebook from tracking you online". They significantly reduce the extent to which they can track you, but are a long long way from complete prevention.
I mean, the container concept can be universally applied to any website.
The user 283894 makes the point that sometimes it's nice to have your logged-in Facebook container separate from other Facebook content.
There is also a special Google Container which works similarly.
The extension also cleans up Facebook cookies from your other containers. This extension is what got me using containers, but I deleted it after I realized it doesn't do anything after the initial installation.
Edit: Apparently this extension also prevents you from opening other sites in the Facebook container.
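For what it's worth, the cookie cleanup described above amounts to roughly this in WebExtension terms (a rough sketch assuming the "cookies" permission; not the extension's actual source, and a real implementation would also iterate over the other cookie stores/containers):

    // Remove any facebook.com cookies from the default cookie store.
    async function cleanFacebookCookies() {
      const cookies = await browser.cookies.getAll({ domain: "facebook.com" });
      for (const cookie of cookies) {
        await browser.cookies.remove({
          url: "https://" + cookie.domain.replace(/^\./, "") + cookie.path,
          name: cookie.name,
        });
      }
    }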
I mostly use it to manage the 3 Google accounts I use daily.
No, I said "remove".
I installed it, and even on a 1-year-old MacBook Pro, holy cow, it's acting like I'm doing machine learning training -- the computer fan goes into overdrive with every little thing I do in this browser!
This is on a raw install with no browser extensions other than this FB container mentioned in the article (which comes with the installation).
It appears this is a months-old known issue with the new Firefox. Disappointing.
Back to Chrome....
At the end of the day, whatever we do, a browser is still a browser. Being safe from trackers would mean:
1. tricking the browser fingerprinting techniques into generating different crap on every run
2. lobbying browser vendors (including Firefox) not to reveal information about your laptop (CPU, screen resolution, ...). I'm still trying to find a legit use case for knowing how many cores are available when JS can only use one, or for screen information when the only thing that matters is already inside the window object.
3. dynamically updating your user agent (if you get rid of it entirely, services like Google Maps won't work)
4. hiding your IP behind a VPN/proxy, or rotating your IP
and probably a lot of other things I have no idea about. I don't want to play devil's advocate, but as of now, Firefox isn't doing a good job at what it claims to do. Just to propose one tiny thing they haven't done: ask for user permission when a website tries to access hardware-related information.
In my Firefox I have an option in Preferences:
Accept third-party cookies and site data: Always/From visited/Never
Is this the one you are talking about in Chrome? Because in that case it very much exists in Firefox as well.
I guess you got lucky.
You can compare them with numbers. Open something like a YouTube video or a Twitch stream, for example, and watch your CPU.
It still fails to play Twitch streams or YouTube without eating the hell out of your CPU, and embedded YouTube videos still lag in many cases (when you hover over the player's control panel, for example). Nothing critical, but those things are still there, and in their bug reports.
Officially it was because of compatibility problems, but I've never faced a problem when spoofing a Chrome UA on Google Maps, so I don't know what to think about this excuse.
But luckily this attitude might be poised to change, because the employees at Google who still care about these things have been pushing hard, and it's now threatening to become a PR issue that Google is needlessly serving inferior versions of big apps to other browsers. When other browsers are a UA spoof away from showing that you're artificially screwing with them on your base Search engine, it becomes harder to push those browsers to adopt the web standards you're trying to push to make YouTube "better".