First-party isolation in Firefox: what breaks if you enable it? (ctrl.blog)
347 points by fanf2 44 days ago | 114 comments



I've been running at home and at work with first-party isolation enabled for a few months now. Google login works fine for me, as does login with Google. It's broken a few internal tools, especially when people do things like hotlink across internal systems (which would have been broken at least some of the time for most people anyway, until they realise that to view _this_ page properly they need to log in over _there_). Also breaks PlayStation Network.

All in all, insufficiently broken for me to be bothered enough to turn it back off :). I do wish there was a "view from alternate origin" feature though, to let me load a site as if it were loaded in an iFrame -- that would let me work around the issues with my internal sites.


I’d appreciate a ‘New Temporary Non-Isolated Window’ for use with websites that don’t work with FPI.



Introducing another container doesn’t address compatibility problems for websites that absolutely must communicate cross-origin. You can’t disable FPI in just one window, tab, or container.


The best strategy, however, is to not use Google at all. It's not like you need to in 2018. There are better services for almost anything out there, although you may have to pay a few dollars for some of them.

Well worth it since they are superior to Google. For email, Fastmail is king.


I use “alternative search engines” daily. However, I have to crawl back to Google if I want to find things that were published in the last two weeks. Even Microsoft Bing can’t keep up with all the content that appears on the web every day.


If it's just search engines, you don't exactly have to be logged in to use Google.

Also, there are Google search-proxies like startpage.com


Startpage and the like don’t get everything you find on Google. E.g. you can find today’s articles from even the most obscure blogs on Google. Searching for the same article won’t return that result on Startpage until at least a few weeks from now. I’m not sure if this is Google holding back on the good stuff, or if it’s Startpage not being willing to pay a higher fee.



HTTP-only? Nope, nope, nope.


site:news.ycombinator.com About 91,000 results (0.38 seconds)

site:news.ycombinator.com in google.com About 11,90,000 results (0.25 seconds)


And how many of the initial results in Google will be filtered out (e.g. takedown notices) by the time you hit the very last page? I'm doubtful about the usefulness of the result count in comparisons like this.


Right? I tried hard to use DuckDuckGo and Bing, but at some point DDG seemed to start producing very Bing-like results, and Bing is just not as good as Google for technical searches. Apparently that's by design; they believe their results are better for general users.

Kind of frustrating :-/


DDG is powered by Bing and Yandex. DDG doesn't have their own index.


I run my own mailserver and use DDG for all my searching.

And use GMail as an MUA and am the GSuite administrator for the school I support.

Using Google is a trade-off, and there are definitely services they provide where the trade-off is worth it for me. Your mileage appears to vary. First-party isolation definitely moves the needle towards Google only having data I'm happy for them to have.


For many people (like me) this is not advice that can be followed, since so many companies use G Suite.


Use Google in a temporary container (Firefox plugin) and isolate Google completely if you need to use them.

Also you can use startpage.com for searches, it uses Google behind the scenes. Looks nicer too. :)


I've been doing this the hard way for years -- running four browsers at all times, each for different things. Chrome is logged into Google, Firefox is logged into Facebook, Safari is for HN/Reddit, and Chrome Canary is for other random sites where I don't want to be already logged in, like when I use the AWS console. And then I also use incognito windows for going to forums and deal sites and all those sites known for having 25 tracking bugs.

Overall it's not too bad, but there are definitely annoyances around not being logged into Google everywhere for example.


It's less drastic than using different browsers, but far easier to use, so depending on how paranoid you are you might be interested in this: https://addons.mozilla.org/en-US/firefox/addon/multi-account...


Have you considered using multiple profiles/containers in one browser? E.g. Firefox supports containers, and Chrome supports profiles.


Firefox actually supports both. It supported profiles way before Chrome existed, and recently got support for containers.


I do something similar. For example, for Facebook I create a profile (using Firefox's profile manager which I access as `$ firefox --ProfileManager` from the command line, but I'm sure there's a simpler way) and I just call it "Facebook". Then I have a script called e.g. `firefox_facebook` like this:

    #!/bin/bash
    # Launch a separate Firefox ESR instance with the dedicated "Facebook" profile.
    # --no-remote keeps it from attaching to an already-running Firefox instance.
    nohup /usr/lib/firefox-esr/firefox-esr \
        --no-remote \
        -P Facebook \
        1>/dev/null 2>&1 \
        &
Then I put a file like this in `/home/user/.local/share/applications/facebook.desktop`:

    [Desktop Entry]
    Type=Application
    Name=Facebook
    Icon=/home/user/.local/share/applications/facebook.ico
    Exec=/home/user/bin/firefox_facebook
I even have an icon in there. The result is that I have a single Firefox profile devoted to Facebook, and it comes with an icon in my start menu with a Facebook image (it's just one of their blue F Facebook logos) which is totally isolated from everything else.
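(On the "simpler way" aside above: Firefox can also create a named profile non-interactively, so a rough sketch would be something like this.)

    # Create the dedicated profile from the command line instead of the profile manager UI
    firefox -CreateProfile Facebook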

I also have a similar version which just copies an empty profile to a random folder in /tmp and then uses that, freshly separated from everything else.
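A rough sketch of that throwaway variant (not the exact script, just the idea: a fresh, empty profile directory under /tmp that Firefox populates and that disappears on cleanup/reboot):

    #!/bin/bash
    # Launch an isolated, disposable Firefox session from a brand-new /tmp profile.
    TMP_PROFILE="$(mktemp -d /tmp/ff-profile.XXXXXX)"
    firefox --no-remote -profile "$TMP_PROFILE"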

This seems like a complicated process, but it's trivial to add more later and I can basically have as many as I want. I did it as a bit of an experiment to see if something like this is feasible (not technically, but more socially), i.e. will I get lazy and stop using it soon? So far it's pretty easy and is a nice way to take webapps and devote a Firefox profile to each of them in a way that makes it seem almost like an Electron app, but without all the extra useless stuff.

TL;DR: I do something similar, except I only use Firefox and its profiles.

Edit: I should also say I run Debian with the Cinnamon desktop, so this exact setup probably won't work for most people here, but something similar is probably possible on every system.


I do the same thing, I just use different Chrome profiles (don't want to multiply my attack surface by using different browsers).


I've been using it since Firefox 58, which fixed a bug that broke cookie whitelisting.

I've been pretty happy. The only website where it really is a problem is PlayStation Network, but I have an add-on that temporarily disables FPI when I really need to.


How can you tell it works? I've tried five times now to enable it and test whether it works. If I understand correctly, if I log in to gmail.com (mail.google.com), google.com should be logged in, but google.dk and youtube.com shouldn't be, since First-Party Isolation should be isolating them. But no matter how hard I try, it doesn't work: if I log in to mail.google.com, I get logged into youtube.com, google.com and google.dk.

Am I misunderstanding how it is supposed to work?

I've tried completely uninstalling firefox 5 times now - including wiping the profile from my machine - but the same thing keeps happening.


Cooperating websites can subvert first-party isolation by redirecting the top level page through multiple first-party domains (with an ID in the URL). And Google does exactly that when you login. How to properly prevent it is still an open question:

https://bugzilla.mozilla.org/show_bug.cgi?id=1319839


I like most of what you said and agree with the cause.

But why does your own blog load up with an apparently-unironic call to be whitelisted in Adblock?

If you don't want to be tracked by other people's ads, why are you helping track people with ads on your own site?

The page also loads Google Analytics.


Efforts have been made to keep the impact of ads and tracking low: https://www.ctrl.blog/about/privacy-policy#privacy-policy-ad...

The ads help fund writing and research into technologies that restrict ads without blocking them outright. Blocking ads is your choice, but it takes away incentives for researching and writing about the topics you care to read about. You can sign up for Flattr if you prefer not to see ads and still want to support writers. https://flattr.com/contributors


Advertise natively, don't track. It's that simple, no need for research. Our ancestors did it for hundreds of years. There is no acceptable level of tracking.


Native advertising trying to pass itself off as journalism by association is a much bigger problem than display advertising. Websites informing you about “great deals” in the voice normally reserved for actual journalistic endeavors (or even opinion pieces), in between their normal articles, isn’t in any way desirable. Even big publishers don’t come clean about which articles are ads and which aren’t.

E.g. https://www.ctrl.blog/entry/pcmag-vpn-review


That's completely orthogonal to what is being suggested.


"Native advertising" has a very specific definition; GP assumed that standard usage. If GGP was intending to refer to advertising without tracking, that would be a non-standard definition.

https://en.m.wikipedia.org/wiki/Native_advertising


> There is no acceptable level of tracking.

On what grounds do you make this sweeping absolute statement? I'm personally willing to accept lots of tracking by Google, Facebook, etc. in exchange for free or cheaper services.


It's fine that you are, but some of us are not. I'm fine seeing plain ads, just not tracking ads. If the website owner doesn't want to show plain ads, then that's their choice.

(note, I don't use an ad blocker, just a tracking blocker)


[flagged]


Personal attacks will get you banned here. Please don't post like this again.

https://news.ycombinator.com/newsguidelines.html


Why do you think that was a personal attack? I simply described a perspective calmly. Perhaps your mood has colored the reading.


I read it as you calling the other user a bootlicker.


It's a class of folks with a certain perspective. Whether a person chooses to join it at a particular time is their business.

It is the difference between saying you ARE bad versus you SAID/DID a bad thing. I didn't say the person was one on purpose; perhaps my wording wasn't careful enough.


I'm not sure what you're trying to argue here, but you can't call people names like bootlicker on HN. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html

If you'd please err on the side of being respectful in the future, we'd be grateful.


That’s genius! Using trackers to research not using trackers... So why don’t you use more privacy-oriented analytics instead of Google Analytics? Matomo (formerly known as Piwik) seems like a better choice, unless that option is exactly what you’re researching.

I don’t mean to be so snarky. Your comment seems so hypocritical to me, though.


I’ve been contributing to the privacy-first design of Fathom. https://usefathom.com/ I don’t use it myself as it’s not all that useful at this time. There is actually a second tracking system in place that records things like the number of people who block ads and analytics.

Google Analytics (GA) on the site is configured to delete data as early as possible and to not store IPs or link activities. However, it’s the only way to get any page-level reporting (like which pages are profitable) out of AdSense.


It’s cute that you think asking a hostile actor to be nice and delete data early and not store some of it would actually work when their bottom line directly depends on them doing so.

They can very well pretend not to do it while secretly still tracking & storing everything.


Matomo has a completely unacceptable impact on client-side page loading performance. The way it stores data also makes it difficult to do anything useful with the data.


Just FYI: Matomo has nearly no impact on loading performance, as it is always loaded async and deferred (so only after the page has finished loading).

If this is still too slow for you, you could try the official QueuedTracking plugin [1], which queues tracking requests in a Redis or MySQL database and processes them afterwards. That way you should get down to about 30 ms per request.

[1] https://plugins.matomo.org/QueuedTracking


ISTR that Matomo also has a log-analysis mode where you don't insert anything into your content, and therefore does not affect performance at all.


Ah. Log analysis hasn’t been useful for years. Browsers pre-fetch and pre-render documents that are never viewed by anyone. You need client-side scripting to know whether a request is for a human or just browsers trying to preemptively load things in expectation of user interaction.


I’m curious - could you explain that impact? I use Matomo and had not noticed it.


Let’s just say their best practices were last updated sometime in the early 2000s. It’s 2018 and their client-side tracker is synchronously executed and holds up the entire page.


This is just plain wrong. Matomo is async by default[0].

I appreciate the difficulty of considering how to monetize content that you put a lot of effort in to, but spreading misinformation on the very same topics you are writing about is harmful and short-sighted. Your responses here moved me from “this seems like an interesting blog to follow” to “nope”.

[0] https://developer.matomo.org/guides/tracking-javascript-guid...


This seems to contradict Findus23's claim that it loads async. Regardless, async="async" can be manually added to script tags.


I don't use an adblocker -- I just use Firefox with tracking protection on. I'm okay with ads, just not track-y heavy ads, and Tracking Protection does the trick.

You may want to look into why Tracking Protection doesn't like your ads.


It’s because tracking protection blocks ads based on domains. It doesn’t recognize that AdSense can serve non-personalized ads that don’t track you. Privacy Badger has the same design limitation.

Firefox and Privacy Badger should want to promote the use of non-personalized non-tracking ads, but allowing well-behaved ads isn’t really a priority.


Why do you believe that Google aren't tracking people? If they have the technological capability to do so they almost certainly do, even if they happen to serve non-personalised ads while doing it.


Because technology alone can’t protect your privacy. You need to trust people. Google says they don’t track when ads are configured to not track. They provide technical details on what this means. It’s designed around the General Data Protection Regulation (GDPR). At some point there has to be trust. I trust that Google won’t risk millions of euros in fines by lying about not tracking people.


> Because technology alone can’t protect your privacy.

Technology alone actually does allow us to choose between "send data to Google and trust that they won't do anything bad" vs. "don't send data to Google and know that they won't do anything bad".

EDIT: I appreciate that you care about privacy. You care about it a lot more than most websites seem to, so it does seem unfair that you're getting more flak in this thread than most websites do even though they run more trackers than you do.

I notice this pattern a lot, in myself and others. When there's a choice between a solution that solves no problems, and a solution that tries to solve the problems but only manages half of them, the latter solution tends to get criticised for the half of the problems it doesn't solve, and the first solution doesn't get criticised at all.

Don't really know what I'm trying to say. Just that the criticism you're getting in this thread (some of it from me) isn't entirely justified, and I'm glad you care about privacy, even if you don't go about it the same way I do.


> I notice this pattern a lot, in myself and others. When there's a choice between a solution that solves no problems, and a solution that tries to solve the problems but only manages half of them, the latter solution tends to get criticised for the half of the problems it doesn't solve, and the first solution doesn't get criticised at all.

It's called "Copenhagen Interpretation of Ethics", and it's a problem.

https://blog.jaibot.com/the-copenhagen-interpretation-of-eth...

TL;DR: "The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster."


> the [partial] solution tends to get criticised for the half of the problems it doesn't solve, and the [ineffective] solution doesn't get criticised at all.

It comes out as criticism, and I agree you're right to critique it, but I think it's intended to be persuasion. People completely write off the "solution that solves no problems" and just look for a way to bypass it. If they see a half-solution, they see potential for improvement, and naturally try to realize the potential. But most people aren't very good at being really persuasive; they just see their own point of view and become critical/forceful.


Sure. And if you block all ads and forms of measurement, people will stop creating the content you like. Then Google won’t know anything about the lack of content in your fields of interest. Great solution.

Just look at the market for “Linux news” websites. Linux usage is up across the board (including desktop installations), but the target audience are all blocking ads so there are barely any websites left that cater to people who’re interested in that content category.

People don’t like paying for content. There are few feasible micro-payment schemes around (because of the high payment service fees) and the few that exist have few users. Ads enable free content to be produced in great quantities on the most niche subjects imaginable. Unless you have a great funding model that can replace ads, we have to find ways to limit what tracking ads can do (FPI) without outright blocking them.

Adblockers don’t just block ads. They block funding for creative efforts.


> Unless you have a great funding model that can replace ads

This is what we're trying to do at Snowdrift.coop (I am a volunteer). There's already a good introduction at https://wiki.snowdrift.coop, so I'm just going to link there and quote the beginning:

> Snowdrift.coop [is] a non-profit cooperative platform for funding freely-licensed works everyone can use and share without limitations.

> Our core feature is a new fundraising approach we call crowdmatching. Patrons donate together by all agreeing to match one another instead of donating unilaterally.

We also have a new 1 minute intro video[1], if you'd prefer that format. Note: the low streaming quality is a quirk of archive.org; you can get higher by downloading the webm.

[1]: https://archive.org/details/snowdrift-dot-coop-intro


In an alternate world where the advertising industry is respectful I would agree.

In our current world the advertising industry has demonstrated their inability to self-regulate, and even Google isn’t any better (see the recent issues regarding Android tracking locations even when the option was set to off).

GDPR is irrelevant in that case. If the tracking happens in such a way that it’s impossible to tell from outside Google then nobody will have grounds to sue.


Google has already been fined billions of euros by the EU, and is currently being investigated by the EU for lying about not tracking people.


> Because technology alone can’t protect your privacy.

Hell, it's the only thing that can.


> Efforts have been made to keep the impact of ads and tracking low: https://www.ctrl.blog/about/privacy-policy#privacy-policy-ad...

FWIW, that's what every website will say. I don't have any reason to trust your site any more than I trust any other site. I don't want to be tracked by anybody, regardless of their intentions.

> The ads help fund writing and research into technologies that restrict ads without blocking them outright. You choosing to block ads is your choice, but it takes away incentives for researching and writing about the topics you care to read about. You can sign-up for Flattr if you prefer not to see ads and still support writers.

I'm sorry, but merely putting something online doesn't entitle you to make money from it. Posting an article or tweet or picture or other content on the public internet is an inherently non-profit activity, and it always has been. It doesn't entitle you to follow me around the internet.

If you want to make money posting online use a paywall.


Is there a test I can use to confirm that it's working? I've set privacy.firstparty.isolate and privacy.firstparty.isolate.restrict_opener_access to true, and when I log in to GitHub followed by Travis, Travis is able to log in without prompting for a password...

Firefox 62 macOS.

Edit: I did lose all my cookies on restart, so I do believe the option is at least enabled. Still would like to test that it's actually doing something.
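(In case it helps with reproducing this, a minimal sketch of pinning those two prefs in user.js instead of flipping them in about:config; the profile path below is a placeholder and needs to be adjusted to your own profile, e.g. via about:profiles.)

    # Append the FPI prefs to user.js so they persist across restarts.
    # The profile directory is an example placeholder; find yours in about:profiles.
    PROFILE_DIR="$HOME/Library/Application Support/Firefox/Profiles/xxxxxxxx.default"
    printf '%s\n' \
      'user_pref("privacy.firstparty.isolate", true);' \
      'user_pref("privacy.firstparty.isolate.restrict_opener_access", true);' \
      >> "$PROFILE_DIR/user.js"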


Go to https://ritter.vg/misc/ff/fpi.html On first load it should say "There was nothing in local storage."

Now go to https://rittervg.com/misc/ff/fpi.html On first load it should say the same. If it says the same timestamp that was stored on the first page - it's not working.

Source: I'm a Mozilla Developer who is one of the primary devs/supporters of First Party Isolation.


What if the box is empty? JS is allowed. (Edit: I guess the culprit is "third party cookies blocked by default")

So wouldn't a better test be about a third party that was used in a first party context before? Since FPI goes beyond third party cookies.


Thanks for diagnosing that for me; you're right, blocking third-party cookies does cause it to fail.

Both tests are equally valid. I just gave one because trying to be exhaustive about testing it would be mind-numbing. The test I provided only does localStorage, but FPI also isolates the DNS cache, HTTP/2 state, image cache, favicons, cookies, localStorage, IndexedDB, etc.

You can do yours by visiting https://anonymity.is/misc/ff/fpi-iframe.html first; then visit the ritter.vg and rittervg.com links.


Thanks for the clarification.

What surprises me the most is that not only Firefox but also my Safari Browser passes all those tests when ITP is enabled.


Safari has a stricter storage access policy by default for all third-party domains, which requires you to visit the domain as a first party first. So it's probably that rather than ITP.


I have a general question if you don't mind. I use Firefox Beta. Why is Firefox going the route of a manual blacklist (Disconnect) instead of working on some kind of programmatic, machine-learning/somewhat intelligent third-party storage blocking by default that doesn't discriminate between known and unknown trackers?


Seems to be working, thanks! (had to disable blocking of third-party trackers for it to function, but after that, it works as promised, and I have re-enabled blocking of third-party trackers)


If Travis redirects to GitHub, I assume GitHub would get access to its own cookies again and then be able to perform OAuth, after which it redirects back to Travis with a token in the URL; no cookies or local storage needed, as far as I’m aware.


Did you restart between changing the settings and doing your tests? If not, they’re invalid, and you should repeat them.


I use it. Over the last year or two this feature, in addition to a renewed effort towards privacy on my part, has led me to simply not use websites which will not work with my privacy settings.

My only real holdouts in the "decidedly not privacy-wise" camp are my Gmail account, used for various emails I still wish to receive but don't wish to give my email to, and my old nick (this one here).


That's a great feature. I just turned it on and logged into everything I care about, and it all worked. I already had so much ad and tracker blocking that it didn't create any new problems.


I guess trackers will just ask websites to route analytics traffic through their own infrastructure. Would it be enough for example.com to set up a DNS alias pointing tracking.example.com to tracking.com?
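(Checking for such an alias is a one-liner; tracking.example.com and tracking.com are the hypothetical names from the question.)

    # Ask DNS whether tracking.example.com is just a CNAME to the tracker's own domain
    dig +short CNAME tracking.example.com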


That breaks cross-domain tracking, though; before, cookies set on analytics.com (pulled on SiteA) would be sent back to them when pulled from SiteB. If you now use different domains on each site, that doesn't work.

(To be clear, I think this is a good thing)


Except all the analytics company has to do is have another shred of evidence that your identity is linked, and it can just give out tokens that represent opaque blobs you take care of and index by the token, the way PHP does session storage.


This would already be a giant leap forward. Usually there is not a lot of trust between actors, and there doesn't need to be: everyone gets their own tracking pixel and is happy; who cares if user data is leaked along the way. The setup you describe requires more trust between parties and, as a side effect, reduces the overall number of potential contractual partners who get access to user data. It also reduces the number of websites with the knowledge to implement such a setup.


Right. It's historically been very asymmetric: the content sites are risking their users' safety by working with shady ad providers, but not their own.

Under a server-side include, the content company is just as much at risk from a malicious ad script as the client.


This is a possible workaround, but it adds a lot of complexity to get HTTPS working. Either the third party must handle HTTPS for the partners who set up CNAMEs, or the first party must handle HTTPS and proxy the requests back to the third party. It's doable, but it will significantly slow things down, to the point where even shitty websites would consider it unacceptable.


The third-party can handle HTTPS just for that particular subdomain, using a separate certificate.


Yes, but this would significantly increase complexity.


Is it really that bad, with automated certificate issuing via Let's Encrypt?
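(As a rough sketch with certbot, using the hypothetical subdomain from upthread; --standalone spins up a temporary web server for the challenge, so it assumes nothing else is listening on port 80 on that host.)

    # Issue a certificate covering only the delegated tracking subdomain
    sudo certbot certonly --standalone -d tracking.example.com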


Security is always a blessing (it keeps your stuff secure) and a curse - people are lazy and don't want to use it because it generally causes pain points. Remembering to bring your keys, remembering increasingly-complex passwords and PINs, remembering to lock your doors, click this security warning, check that checkmark box. Security is a pain. But it's also a necessity. I like the idea of more isolated sandboxes, reducing third-party tracking cookies, third-party content. I go to my bank's web-site, why do I want to grab information from outside of my bank? Anyway...it's good to see Firefox is trying something new. It'll be interesting to see how well it works in the wild.


This is great news, especially in stark contrast to other articles on the front page of HN right now.


Interesting. I was planning to migrate over to FF anyway.


What about just using uMatrix (or similar extensions)? You have more precise control over what gets allowed and what not and you can just temporarily or partially disable protection for logins/payments etc.


It's not the same thing at all. See this discussion to learn the differences between uMatrix and FPI: https://www.reddit.com/r/javascript/comments/9edeqe/firstpar...


> I’m not sure whether that is because Mozilla consider it unsafe, unpractical, or don’t want to commit to maintain the feature in future releases.

I imagine it was implemented for the container tabs.


The main problem I see with account containers — still — is that you can't say "this container can only have certain websites in it". For example, if you put reddit in a "social" container, but click on links to the stories, then you have all of your cookies and stuff polluting the social container.


The Facebook Container extension does this. The maintainers hard-code the list, but it really makes me think that someone could make an extension to let users manage a list themselves. As an early adopter of Facebook Container, I had to live through a period when I couldn't log into messenger.com (the only part of Facebook I really used). For that specific project, the maintainers wanted to control the list of domains as opposed to allowing users to maintain their own list and "dilute" the effectiveness of the container for "everyone else". I.e., if something was missing, they wanted to fix it upstream and push it down.

I would like to see an extension like this for Google. I do use Google, and I have set the various Google apps I use to only open in the Google container. But the main google.com/search domain does open in any container, so I have to be careful not to log in if Google prompts me to.

https://github.com/mozilla/contain-facebook/issues/45#issuec...

https://github.com/mozilla/contain-facebook/blob/1e37bc677ac...


I feel like the cause is most likely that there are just too many darn websites for a user to be willing to specify them all. Solving this would seem to require some global database of "all sites run by company X" that undergoes constant maintenance.


How far would the WHOIS records go toward providing that info?


Not sure... sounds to me like you'd at least need a cache of all WHOIS records in the world to be able to invert the mappings from an org to its domains.


The Temporary Containers plugin is very nice. It can be configured to keep all tabs completely isolated, as if they were their own browser.

And you can set it to isolate subdomains or not, which is very useful when you need multiple tabs and want the same session in them.


FPI was implemented (and enabled by default) in the Tor Browser and ported upstream into Firefox. Container tabs are a side product of that effort.


If you enable FPI, is there any benefit to also using container tabs? That is, assuming that you aren't using containers to allow you to log in to the same website with different accounts, but rather to keep data from different websites separate?


FPI is intended to stop tracking between domains. Containers are designed for the other use case you just described. You can use them both at the same time if you want to achieve both.


Not quite; it's a different axis. Security tokens in Firefox contain the domain of the actual request/resource they are related to (e.g. the domain of an iframe) and a bag of data called originAttributes. Containers are one value in that bag. The top-level window domain (first party) is added as another value if you enable FPI.



It could be off by default because, in Tor-strict form as imported, it breaks the web for too many users to make the value it provides worth a default-on. That’s a common consideration for preffed-off features in many browsers.


The biggest problem is SSO and similar - especially across organizations that have multiple domains (think the various google properties for instance). Part of how they work is the login process cycling through multiple domains when you click the login button. Dealing with that, and making everything work without also making the same technique work for tracking is ... challenging.


Every SSO is different, but mine all worked.


Another approach is to use self-destructing cookies. You can then whitelist some domains and break much less.


You can still be tracked through cache tokens, DNS cache-probing, and methods other than cookies. FPI is so much more than just keeping cookies under control.


please s/quite/quiet


So after enabling it, is there some easy way to see if it actually works?


Go into your FF profile and open SiteSecurityServiceState.txt; it will show every HSTS entry separated by first-party domain.

Similarly, in the storage folder in your FF profile you can see that every first-party website has its own folder; third-party cookies are placed inside that folder and cannot share data with other folders.
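A quick way to eyeball that partitioning from a shell (the profile path is an assumption for a Linux install, and the ^firstPartyDomain= suffix is an assumption about how Firefox serializes origin attributes into these names):

    # List per-site storage folders; with FPI on, names carry a first-party suffix
    ls ~/.mozilla/firefox/*.default/storage/default/ | grep firstPartyDomain | head

    # HSTS entries in SiteSecurityServiceState.txt are keyed the same way
    grep -c firstPartyDomain ~/.mozilla/firefox/*.default/SiteSecurityServiceState.txt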


Not super reliably. But try this one: https://github.com/mozfreddyb/test-firstpartyisolation


I won't copy paste my reply from above, but rather link it: https://news.ycombinator.com/item?id=17949613


How is this different from using uMatrix / uBlock Origin?



TLDR: in my experience: very little.



