Every time there's something about online privacy with browsers, it's mostly Firefox or Safari. I wondered whether Chrome had fingerprinting resistance on its radar (guessing that it wouldn't be in Google's interests to add any feature that would thwart profiling users online), and I found this [1] confirming my guess (emphasis mine):
> Since we don't believe it's feasible to provide some mode of Chrome that can truly prevent passive fingerprinting, we will mark all related bugs and feature requests as WontFix.
I haven't read all the analyses in the links in that article, but this sounds defeatist and lazy, quite unlike the stance Chromium takes on security or performance on the web.
Contrast the above with what this article says about Firefox:
> Firefox's upcoming letterboxing feature is part of a larger project that started in 2016, called Tor Uplift.
> Part of Tor Uplift, Mozilla developers have been slowly porting privacy-hardening features developed originally for the Tor Browser and integrating them into Firefox.
If you value online privacy, your best choice is Firefox (though it requires some additional manual configuration). Safari comes second (its extensions directory could use more love). The choice where you can add more of your influence to is Firefox — by using it, evangelizing it and by donating (if feasible) to it.
It's ironic that, in a time when the browser you use matters less than ever thanks to good HTML5 standard support, and the differences lie mostly in the fringe, most recent corners of the living standard, user agent strings are more detailed than ever.
It would help if the browser vendors, particularly Google, took a step back and spent some time thinking about why user agent strings were invented. They were kludges for deciding how to present a web page, and they're far less necessary nowadays: you can query mobile device dimensions, provide HiDPI resources that are only fetched when needed, serve entirely different views for mobile or desktop, and so on, all without peeking at that ugly string. Beyond that, we have polyfills and frameworks that guarantee cross-browser compatibility and minimum supported versions, again without resorting to peeking at browser engine build numbers or worse, because the detections are now largely integrated into the standards themselves.
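A minimal sketch of what UA-string-free adaptation looks like in practice; the 'mobile-layout' class name is made up for illustration:

```js
// Adapt to the device without ever reading navigator.userAgent.
const isNarrow = window.matchMedia('(max-width: 600px)').matches; // layout choice
const isHiDPI  = window.devicePixelRatio > 1;                     // pick HiDPI assets
const hasTouch = 'ontouchstart' in window || navigator.maxTouchPoints > 0;

if (isNarrow) {
  document.body.classList.add('mobile-layout'); // swap views, no UA sniffing
}
```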
Actually, browser compatibility is not getting better; it is getting worse.
I was working for a firm last year that made a system with a browser front end that only supported Chrome and Safari, not Edge or Firefox -- this is happening everywhere, and it is why MS threw in the towel with Edge.
To be honest, most people are confused about "private mode". I agree there should be privacy options enabled by default with it, but the reality is pretty much: no browser history will be stored (locally), and your session/cookies will be isolated between private mode and normal mode.
Indeed. It's intended as a method of watching porn without having it pop up in the address autocomplete later when your kids are trying to go to the Peppa Pig website.
Google still knows my precise location sometimes even when I'm connected to a VPN.
The browser itself uses more than just your IP to determine it (Wi-Fi networks, phones linked to your account), and there's no way to control that.
> guessing that it wouldn't be in Google's interests to add any feature that would thwart profiling users online
I would actually think the opposite. Wouldn't it be better because then only Google would have that information? Only Google would be able to fingerprint. This is of course under the assumption (which is currently accurate) that Google has the majority share of browsers. But maybe it wouldn't be, because it would teach others how to thwart their fingerprinting.
I suspect that's the biggest reason Google was so interested in "https everywhere". That removed detailed browsing visibility from a lot of entities, but not Google.
I've seen this tendency on HN - if this person/entity does something, I suspect it must be bad or selfish.
On the subject of HTTPS: do you think the interests of ordinary consumers are in any way served by continuing on HTTP? Countless websites, even those accepting login credentials, used to think it was acceptable not to take the trouble to set up HTTPS. The only thing that made the operators of these websites care was being marked "insecure" by the most popular browser. The shift to HTTPS was a definite win for privacy for every person who uses the web.
But no. Apparently, the "biggest reason" the people working at Google pushed this was that it was good for Google. They didn't care about all the benefits for end users; they only cared about themselves.
It's sad that slander like this can become mainstream view in a forum like HN.
Disclaimer - no connection to Google in any way. Don't even own Google stock.
"I've seen this tendency on HN - if this person/entity does something, I suspect it must be bad or selfish."
Companies tend to do things that advance their competitive advantage.
"slander" - really? For "I suspect", followed by a theory that an online ad company might bandwagon something (otherwise good) that helps them for less than altruistic reasons?
Google has long held the stance that anything that's good for the web is good for Google. Google is nearly ubiquitous online these days, so people spending more time online will almost certainly benefit their business.
That's why they don't need any ulterior motivation for efforts like HTTPS Everywhere, Google Fiber, or Chrome. The very fact that those technologies make the web safer, faster, or more powerful already benefits Google indirectly.
> Google has long held the stance that anything that's good for the web is good for Google.
Not denying that, but this could be reversed and would still make sense: Google has long held the stance that anything that's good for Google is good for the web.
A corporation can set any goal the founders please. Making money need not be the dominant one. Obviously making enough to stay solvent helps with longevity.
Have you worked in one (or more than one) and observed its high-level decision making? It's about money and nothing else. This is not unique or special, and the only way you'll ever truly believe it is any different is if you make the grandly naive mistake of listening to the marketing department (either internal or external).
Alphabet is a publicly traded company and the officers have a fiduciary duty to do what’s best for their shareholders. This normally means maximizing profits. A corporate officer could make a case to shareholders that not maximizing profits in X area could have Y long term benefit, but that’s not very common these days.
Of course, the officers of public companies try to do what’s best for shareholders. But here’s a different take on it by Tim Cook five years ago [1] (interpret it however you’d like):
> "When we work on making our devices accessible by the blind," he said, "I don't consider the bloody ROI." He said that the same thing about environmental issues, worker safety, and other areas where Apple is a leader.
> …
> He didn't stop there, however, as he looked directly at the NCPPR representative and said, "If you want me to do things only for ROI reasons, you should get out of this stock."
Maximising shareholder value is very often not what's best for shareholders. The current popular myth that it is somehow mandatory has been coincident with a definite change for the worse.
Hasn't Amazon been notable for actively avoiding profit through much of its existence?
They reinvested their profits in themselves. That's not really avoiding profit (and I think there is a definition of profit that includes the company gaining value).
I think calling it the "biggest reason" is wrong, but not entirely wrong. There's a difference between Google's reasons and your benefit.
It could simply mean the interests aligned. I get my credentials encrypted and Google gets more [insert stuff here]. When the interests don't align the result is a bit different. You could be sending tracking/location data every few minutes, adblockers could be crippled, you could be forced to sign in to sync Chrome data, etc. Usually the very same stuff Google relies on for revenue.
So what I'm saying is that it's not that outlandish to assume Google had more reasons than your benefit.
Sorry, but there's really no reason to force HTTPS on static webpages, news sites, or any site where you're just browsing/consuming.
Google is smart enough that they could've enforced the penalty only on sites that should use HTTPS, such as those with any kind of form submission or login. But they chose not to, without much of a reason, and that's why I can believe it helped some hidden motive of theirs.
They’re talking about Chrome’s (and other browsers’) marking of http sites as insecure, and Google-as-a-search-engine penalising non-HTTPS sites. These are good things.
There's been a lot of press recently about shady data brokers selling information to Facebook. I would be genuinely surprised if those same data brokers don't sell to Google.
(In addition to full-time data brokerage firms, there are third-party phone app APIs that slurp data and feed it to advertisers. Facebook has at least one. Presumably Google does too...)
There is a way to stop fingerprinting: serving pages via a distributed network (over a WoT or torrent-like thing).
All these other ways do is give people the illusion that they're safe from being tracked, when the reality is that they're tracked just the same, but by fewer parties, so the data is more valuable. This means that the money is centralizing around the actors with the most opaque methods of tracking, who are almost always the worst actors.
I hate it too, even though I'm not blameless. It's impossible to compete without a level playing field, and that playing field needs to be technically enforced, because otherwise we get region shopping and advertising/analytics models that push people to create intractable mechanisms so they can paper over how tracking fed into them.
For example, imagine a world where I'm bidding to show an ad to a visitor of nytimes.com. Now, I may not track the user, but if anyone is, they can incorporate what they know and sell that traffic back to me on a CPA model. All I see is the incoming traffic. I don't track anyone (wink, wink) but there is no difference.
In the long run this will be solved one way or another, and all these online surveillance capitalism companies will crash and burn. Either we get a web with technical guarantees, or we get a balkanized internet where every state makes its own weird laws about what is allowed or not.
Protocol-wise, basic shared VPNs will stop almost everything short of a semi-global passive adversary. The problem is running hostile code on your own machine, coupled with browser makers thinking it is a generally fantastic idea to allow that hostile code to access a whole slew of security-sensitive information.
Sure, a VPN won't stop the region bullshit, and can even be outright blocked. But if adoption rose to the point where websites didn't want to lose the traffic, those practices would diminish.
However, I do agree a non-immediate, user-centric protocol is sorely needed, especially to stop those global snooping adversaries.
That's my biggest fear. Yesterday I woke up to find out that Three (a big telco provider in the UK) had blocked Mullvad's API path as 'adult content'. I had to physically go to a store and verify that I was an adult to be able to use my VPN - the very same VPN I use because I have so little trust in the UK Gov (I'm not from here and am not staying long term, but I hate that the society regularly reminds me of 1984).
I was referring to blocking by websites, as there is actually some competition between them. For example, a few major ecommerce sites frustrate or outright block traffic from my VPS, so I've simply stopped visiting them rather than poking holes.
A similar dynamic is present for ISPs: if most people expect "Internet access" to mean something that works with VPNs, then ISPs can't block it [0]. But that's a harder state to reach, as there's much less competition between ISPs. A commerce site wouldn't want to forgo 10% of its business, but a government/quasi-government has no problem demonizing 10% of its subjects.
Which is why we ultimately do need non-real-time protocols and namespaces for the bulk of communication.
[0] Nagging systems like the UK would still exist, but they couldn't progress further to outright banning it.
> The general idea is that "letterboxing" will mask the window's real dimensions by keeping the window width and height at multiples of 200px and 100px during the resize operation --generating the same window dimensions for all users-- and then adding a "gray space" at the top, bottom, left, or right of the current page.
> The advertising code, which listens to window resize events, then reads the generic dimensions, sends the data to its server, and only after does Firefox remove the "gray spaces" using a smooth animation a few milliseconds later.
Would using a setTimeout() on the window resize event bypass this? Send the data 20-50ms after resize is completed giving enough time for the letterboxing stuff to go away revealing the actual dimensions, or something? They say it only blocks the dimensions during the resize event and FF removes the letterboxing "a few ms later"
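A minimal sketch of the bypass being asked about; sendToTracker is a stand-in for whatever exfiltration the script uses, and whether this works at all depends on whether Firefox ever exposes the true size to content code after the animation:

```js
let timer;
window.addEventListener('resize', () => {
  clearTimeout(timer); // debounce: only fire once resizing has settled
  timer = setTimeout(() => {
    // Read the dimensions after the letterboxing "gray space" should be gone.
    sendToTracker({ w: window.innerWidth, h: window.innerHeight }); // hypothetical helper
  }, 50);
});
```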
Presumably the implementation is smarter than being defeated by this easy trick, but I too wonder how it works.
> Finally, an extra zoom was applied to the viewport in fullscreen and maximized modes to use as much of the screen as possible and minimize the size of the empty margins. In that case, the window had a "letterbox" (margins at top and bottom only) or "pillbox" (margins at left and right only) appearance. window.devicePixelRatio was always spoofed to 1.0 even when device pixels != CSS pixels.
So presumably the window size is not being reset to the real size; Firefox just does a smart zoom-in. In other words, the fake size remains throughout the entire session.
> Would using a setTimeout() on the window resize event bypass this? Send the data 20-50ms after resize is completed giving enough time for the letterboxing stuff to go away revealing the actual dimensions, or something? They say it only blocks the dimensions during the resize event and FF removes the letterboxing "a few ms later"
No, it will be a setTimeout on the document load event that will poll the window size every 100ms from here till the page is evicted by a close or navigation event, increasing the detrimental effect of adtech.
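Spelled out, that pessimistic prediction is just (recordDimensions being another hypothetical tracker call):

```js
window.addEventListener('load', () => {
  setInterval(() => {
    recordDimensions(window.innerWidth, window.innerHeight); // hypothetical tracker call
  }, 100); // poll every 100ms until the page is closed or navigated away
});
```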
There's like a billion side channels to determine how big the screen is, unless you just want to entirely break basic CSS, which is a pretty unreasonable way to address this problem.
Surprisingly, most sites are perfectly usable with CSS disabled. They end up looking a bit like "motherfucking website"[1], or what you see in a text-based web browser.
Wouldn't loading all external links right away (think background-image) solve this? How does the site exfiltrate the gathered information without JavaScript or tracking pixels?
Edit: Having a bunch of HTML buttons/links, showing a different set to agents based on their resolution, and waiting to see which ones they follow would break this, unless everyone crawls a lot of stuff they don't need. Pretending to be one of a few common sizes is probably a better solution.
How? Admittedly my knowledge of CSS is dated, but without scripting enabled you can't set cookies, make automatic server requests, or even conditionally load an external CSS file (that could be served and counted).
It's not something I've considered before and I am genuinely curious how this would work.
You can make server requests by loading images and fonts. Browsers only load those resources they actually need, so there's lots of opportunities for conditionally triggering requests. Media queries for window size, fallback fonts to check installed fonts, css feature checks to make guesses at the browser type, ...
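A sketch of those tricks; the /t?... endpoints are invented. The CSS alone does the work when served in a static stylesheet; it's wrapped in JS here only to keep the example self-contained:

```js
const trackingCss = `
  /* Window size: only one of these background URLs is ever fetched. */
  @media (max-width: 800px) { body { background-image: url('/t?w=narrow'); } }
  @media (min-width: 801px) { body { background-image: url('/t?w=wide'); } }

  /* Installed-font probe: the url() fallback is requested only if the
     local font is missing, leaking one bit about the system. */
  @font-face {
    font-family: probe;
    src: local('Some Rare Font'), url('/t?font=missing');
  }
  body { font-family: probe, sans-serif; }
`;
document.head.appendChild(
  Object.assign(document.createElement('style'), { textContent: trackingCss })
);
```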
> Apps need it to determine where to place elements.
Could they hide the actual window dimensions from website javascript by only allowing a special kind of sandboxed function to access it? The website's code only really needs to do arithmetic on those values, so the browser could deny access to the actual values and force the code to manipulate them symbolically.
If I'm allowed to query the position and/or size of anything else in the DOM, I can figure out the window size by aligning elements at the edges, or by making one 100vw x 100vh and querying its position/size, so you really can't let me access the position or size of anything. I might have elements styled based on media queries, or old-fashioned DOM queries, so if I'm allowed to change how a button looks based on window size, I can then check something about that element that isn't directly related to size or position. For example, it doesn't make sense to have a "download the app" button on desktop, but if you let me make it invisible then you can't let me query its visibility. This is true of all styling: if you let me derive it from vh/vw, then you can never let me query it after that, which makes a lot of things tricky.

Trading functionality that relies on DOM/media queries for privacy is totally valid; I'm just saying that it will make some non-obvious things impossible for a developer to do, and there are sites today that people enjoy using whose core functionality would break if this is the future. Browser-based CAD tools were recently discussed on HN, and those are right out. Really, I think the future is both, but I'm not quite sure how they'll coexist.
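The first side channel mentioned above, sketched out; it derives the viewport size without ever touching window.innerWidth:

```js
// Insert an invisible element sized in viewport units, then measure it.
const probe = document.createElement('div');
probe.style.cssText =
  'position:fixed; top:0; left:0; width:100vw; height:100vh; visibility:hidden; pointer-events:none;';
document.body.appendChild(probe);
const { width, height } = probe.getBoundingClientRect(); // layout reveals the viewport
probe.remove();
```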
> Trading functionality that relies on DOM/media queries for privacy is totally valid
Perhaps it should be a site-specific permission like the microphone or camera. Your generic news site doesn't need that functionality (and shouldn't ask for the permission - you'd know something shady was going on) but your browser-based CAD tool would and you'd grant it there.
This will cause a permissions fatigue. Only the most sensitive things should have permission. The usage of these capabilities is large enough that it should not be behind a permission.
If we went down this path, I think any permissions dialog would come at the end of a very long PR campaign and feature ratcheting to get developers to update their sites to not need the permission unless absolutely necessary. Sort of like what's happened with the deprecation of Flash.
That part doesn't seem too unreasonable to me, but you could also just go with the largest available size and then scale it as necessary on the client.
The browser could pick a fake screen size, and behave in a way that is consistent with that fake screen size. This would probably break many sites, but it would mitigate fingerprinting if a common size was used.
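A minimal sketch of that idea as a userscript or extension might attempt it; note that layout and media queries would still see the real viewport, which is exactly why letterboxing changes the window itself rather than just the reported numbers:

```js
// Report a common fake screen size and stay consistent with it.
// All values below are illustrative; 1920x1080 is just a popular size to blend into.
const FAKE = { width: 1920, height: 1080, availWidth: 1920, availHeight: 1040 };
for (const [prop, value] of Object.entries(FAKE)) {
  Object.defineProperty(window.screen, prop, { get: () => value }); // shadow the native getter
}
Object.defineProperty(window, 'devicePixelRatio', { get: () => 1 });
```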
I doubt that is avoidable, as the browser would still probably need to render at the false viewport dimensions. For a common adversary, fingerprinting based on timing would be more involved and less useful.
Even if they do, there's variation in what "full screen" means. Some people have the bookmarks toolbar enabled, others don't. Some have compact icons, others don't. Some keep a sidebar open, others don't. Some use a theme that changes the size of toolbars. Some have a dock/taskbar always visible, some have differently sized taskbars, etc.
This all leads to a huge variation between users of even the same screen size (e.g. 1920x1080), since the portion of the screen available to the page is different.
The Tor browser fixes this by having the window always be the same size on all machines, regardless of screen resolution. This is a bit annoying because it means you have less stuff on the page visible at a time, but since it makes you look the same as every other user, it's worth it for privacy conscious users.
Yes. I'm in adtech. 60% of browsers are mobile/tablet, which have fixed window sizes already. The rest are almost always fullscreen. Maybe 2% have non-standard sizes.
Fixed except when the user enables android split-screen mode!
I believe split-screen mode implies the height of the browser window can change at runtime (in a JS visible way), but haven't looked at it recently.
I'm not sure what percent of people customize their dock height on macOS, but that setting uses a slider, which would cause a bunch of unique heights for a maximized browser.
The OS chrome between users varies a ton. Each taskbar, dock and titlebar can have their own size. In my case I'm using a window manager without decorations, so I don't even have a titlebar!
Huh, I thought the original was a sarcastic question. In that case, let me explain:
I keep a browser window open at all times. It is never full screen, because if it were full screen I wouldn't be able to see multiple windows at the same time.
I keep my browsing window as close to 1024x768 as possible. In 2019, a lot of websites can't handle a browser window using a mere 75% of the laptop screen, so they either render incorrectly or, worse, switch to a mobile view. When that happens, I either blacklist the website forever in a contemptuous fervor, or just resize the window. Apparently, this resizing action is trackable.
When I say "as close to 1024x768" as possible, I mean exactly 1024x768 unless I have resized it and forgotten. I use a little AppleScript thing to resize it to 1024x768, precisely for browser fingerprinting reasons. When you resize the window by hand, you typically end up with a VERY unique window dimension.
Even if you had 100 users with a 1024x768 screen resolution, they could be fingerprinted further because of small differences in the browser. Zoom setting, toolbar size, bookmarks button showing, full-screen mode, small icons, additional toolbars, taskbar auto-hide, and a larger-than-standard taskbar all affect the viewable area of the browser, and that is what the site operator or analytics will see.
It matters because if 99% of people have the same 5 configurations and only the outliers are identifiable, then this method would not be as valuable for spying as it is reported to be.
Would something like Perl's taint functionality work? I.e., all values derived from size, position, colour, pixel data, user agent, etc. are marked as tainted, and are stripped (or randomized or replaced with default values) from data that is sent over XMLHttpRequest and other communication methods. It's probably extremely hard to make that watertight though.
Even if it was implemented perfectly, you could work around that using timing side channels.
For example, multiply the value (e.g. window width) by some huge number, perform a slow operation in a loop that many times, and finally clear a flag. Meanwhile another thread is filling an array one by one until the flag gets cleared. The last non-tainted index in the array indicates your approximate window width.
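A single-threaded simplification of that attack (the two-thread version in the comment works the same way): the tainted value never leaves the function, but the untainted timing does:

```js
// Assume `taintedWidth` is a value taint tracking forbids us from sending.
function leakViaTiming(taintedWidth) {
  const start = Date.now();                       // plain, untainted number
  for (let i = 0; i < taintedWidth * 1e5; i++) {} // busy-work proportional to the secret
  return Date.now() - start;                      // untainted, yet encodes the width
}
// After calibrating loop speed once, a tracker recovers an approximate width.
```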
> Apps need it to determine where to place elements.
This determination can't be done client-side? In other words, if I resize the window, it's going to send the new size to determine where to place the elements in the "new" area?
The W3C should probably create a new, rich spec hundreds of pages long so that frontend developers may instead declare images as a unitless set of point relationships to be rendered at any resolution without digital artifacts.
For example, instead of working on the pixel level, the developer would be free to simply declare, "an arc may exist in one of these four locations." Then, merely by declaring two further "flag" values, the developer can communicate to the renderer which three arcs not to draw, except for the edge case of no arc fitting the seven previously-declared constraints.
Just imagine-- instead of a big wasteful gif for something as simple as an arc animation, the developer would simply declare, "can someone just give me the javascript to convert from arc center to svg's endpoint syntax?" And someone on Stackoverflow would eventually declare the relevant javascript.
The browser can also ship with a pretrained GAN, so the site just asks for a picture of a cat and then the GAN creates one as needed, but nobody will know exactly which cat you saw.
You can't. Even if you block JavaScript, they can still get it from CSS media queries, where they can specify which file to download for a given screen size.
The best solution is to use the same screen size as everyone else so you don't stand out. And that's what this does.
Indeed, but not revealing the screen size is super easy. Just turn off javascript (except, maybe, for a whitelist of 2 or 3 sites where you really need it).
It's more a case of the web being used in ways in which it wasn't really originally intended. Of course developers can implement things poorly and create problems (and often do) but demand for things like responsive sites is user-driven in my experience.
If you don't understand how the web works and actively dislike the community I don't understand why you keep commenting here.
I recommend the privacy.resistFingerprinting about:config setting mentioned. It's been available for a while and does other things too, like changing your user agent.
I've been using privacy.resistFingerprinting for a while and also recommend it, but there is one major "side effect": your reCAPTCHA score will drop to 0.1, making many websites really tedious to use. It's a price I'm willing to pay though...
reCAPTCHA is a Google thing so it gets blocked in my browser already anyway (by uMatrix). If I need to load it to see a website, I close the tab immediately and go somewhere else.
I use a mobile app and have gotten errors saying I need to solve a captcha. Since I can't do that on this app, it just means I'll stop commenting for a couple days until HN decides to stop bothering me.
That's happened twice, and it really hasn't been a big deal. I've never actually done a captcha for HN, it's just not worth it to me.
I personally take the route of training the algorithms poorly. It wants me to select streetlights? Fine, I'll pick all but one streetlight and maybe even a tree. Do I have to click more boxes because of this? For sure. Is it worth it? Yep.
Just another one of 10 thousand reasons why reCAPTCHA is utter cancer on the web.
I've been traveling a lot for the past few years, and public wifi combined with a bunch of other things makes _my_ reCAPTCHA ask me to answer multiple queries every single time.
By this point if I see reCaptcha I just close the window unless it's something _really_ important.
Also, reCAPTCHA isn't any more secure than other recent captcha solutions when it comes to bot protection, FYI.
Is there any reason this isn't on by default? I don't know exactly how it works, but to my understanding, anti-fingerprinting tech generally works better when everyone uses it (otherwise you stick out as the "anti-fingerprinting" browser).
> "because the system struggles to determine who you are. "
Is the tile fade-in also the system struggling to figure out who you are? No, it's vindictive. Punishment for not browsing the web the way Google wants you to.
No, sorry, but that's nonsense. reCAPTCHA v2 doesn't do the tile fade-in unless your v3 score is low. The fade-in I'm talking about can take one to five seconds per tile, plainly designed to annoy the user.
(Note that you do not need to be on a shared IP to experience this. Merely using Firefox with resist fingerprinting and an adblocker is enough to trigger this behavior on an IP Google normally 'trusts' when using Chrome.)
This happens to me all the time at home, I use Safari with uBlock Origin and reCaptcha never fails to decide to fuck with me.
I have a static IP address, and Google will happily give me a not-dick captcha if I use another browser without anti-tracking features enabled (even with a clean profile).
In addition to what other posters have said, I noticed it hides the timezone, so messaging apps had timestamps that looked wrong because it thought I was somewhere else.
Haven't used it in a while though so this may have been changed.
Aside from the downsides mentioned in other comments, this significantly reduces JS timer accuracy which will make games and WebGL laggy and unusable.
In about:config if you search for 'resistFingerprinting' there seem to be sub-settings which you can tweak to disable the timer modifications, but even after tweaking them I wasn't able to get performance to be as smooth as when resistFP was completely disabled.
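If you want to see the effect yourself, here's a quick granularity check to run in the console with resistFingerprinting on and then off:

```js
// Collect the distinct step sizes performance.now() moves by.
const increments = new Set();
let last = performance.now();
for (let i = 0; i < 1e6 && increments.size < 5; i++) {
  const now = performance.now();
  if (now !== last) {
    increments.add(Number((now - last).toFixed(3))); // delta in ms
    last = now;
  }
}
console.log('observed timer increments (ms):', [...increments]);
```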
Changing your user agent will do terrible things to captchas. I actually change my user agent specifically to test the fail state of captchas. I'd suggest only turning that on if you know what you're doing.
Now it's used for tracking and for training proprietary road image recognition for some mega corporation whose slogan was "don't be evil", which is quite hysterical really.
> it seems like it would mostly not do anything when using a tiling WM with fixed splits
In the bug report [1] it says:
> We haven't yet landed this feature in Tor Browser for a few reasons:
> - ...
> - Tiling window managers on Linux are hard to detect. Any implementation will need to behave appropriately for those.
This is horrible from a UX perspective. There are many fingerprinting techniques besides this. I don't see how adding a user hostile behaviour will help.
Fortunately, this is entirely optional. The "privacy.resistFingerprinting" option bundles a set of features that make it more difficult for sites to uniquely identify the user at a cost to usability. It's up to each user to determine whether that usability cost is worth the privacy improvements. On Firefox, it's an opt-in setting, off by default.
An easy fix is ad-network host blocking. The real problem is canvas fingerprinting, but that's an inherent issue of the whole graphics stack, so everything depends on your freak-out factor.
They should turn fingerprinting resistance to max and install uBlock Origin by default.
It might mean millions of FF users would suddenly struggle with captchas, but it might also mean that site creators just stop using reCaptcha and similar.
Or, most 'regular' users will switch to Chrome because reCAPTCHAs work better in that browser. Firefox needs to make sure it's not going to ruin the user experience by breaking sites like this.
Microsoft just switched the rendering engine for Edge to Google's rendering engine, helping increase the monoculture.
Despite all Microsoft's posing and snuggling up to the Open Source world, they're just as bad as Google and Facebook if given the chance. (And they have been as bad in the past.)
That's just a bet. I doubt Mozilla is willing to blast Google on the chance there is someone else willing to pay them; they'd probably rather have a sound financial cushion from non-Google sources before going after Google.
Google isn't going to bother the general public like that; that's limited to small groups like techies who block canvas fingerprinting. Do you think Google is going to spam people who use Safari's default Intelligent Tracking Protection?
But I've already got code in my xmonad.hs that clamps firefox windows to common monitor sizes?
It's truly unfortunate that browsers just punted on security, dumping endless amounts of sensitive information into a purported sandbox. Why bother developing something with a secure mindset to begin with, when you can just band-aid on patches later?! It's the sendmail/ActiveX philosophy all over again, only now with network effects.
Why can't the ad industry just accept that there are some people out there who don't want to see ads and wouldn't click on one to begin with? Then they can honor Do Not Track and those who choose to work in adtech can start working on things that are more productive to their business.
That's definitely true of the ones that hijack browsers and attempt to trick users into installing malware.
I have and do pay money for content. For those who don't offer that option I've got uBlock Origin as ads (alongside privacy) also present a security issue.
You can opt out of ads, actually, and those will be respected. And companies in the industry knew they'd play by the rules or suffer.
I was in favour of DNT and made a little browser extension that would let you DNT some sites and not others. My hope was that eventually we'd reach a state where you could signal to a site right away whether you'd accept tracking or not, and the site could paywall you if you didn't. That way it's a "pay with your data or money" choice loaded into the User Agent, and I think I like that. It respects user choice.
I've been using FF with resistFingerprinting on since it became available. Letterboxing does break a lot of websites and apps, sometimes making them unusable due to incorrect positioning and scaling of elements.
Firefox is already valuable for browsing on a mobile phone, where there is not much space on screen to have the dev console anyway.
I recommend trying Firefox Mobile
At this point Firefox should just merge with Tor if they want to market themselves as the pro-privacy browser. Right now I just use Chrome when I'm using my real identity for work and shopping and social media anyway, as it's a very good browser, supported everywhere, with an open source version through Chromium.
When I need actual privacy, I just use Tor, which supports most sites and is way more protective of my privacy than Firefox. I may switch to Brave in the future for this use case as they're adding Tor support, but right now Chrome plus Tor every once in a while works best for me.
The Tor Uplift process later continued in Firefox 55 when Mozilla added a Tor Browser feature known as First-Party Isolation (FPI), which worked by separating cookies on a per-domain basis, preventing ad trackers from using cookies to track users across the Internet. This feature is now at the heart of Project Fission and will morph into a Chrome-like "site isolation" feature for Firefox.
Why not simply allow the user to control which JS APIs are available/enabled, kind of like the camera/mic permissions? If sites simply cannot use mouse events or window-size events, they won't be able to fingerprint. This grey-box alternative seems like a complicated hack.
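For reference, the existing permission machinery this would extend looks like the snippet below; 'geolocation' is a real permission name today, while a window-size permission is purely hypothetical:

```js
// Today's model, which the comment proposes extending to layout APIs:
navigator.permissions.query({ name: 'geolocation' }).then((status) => {
  console.log('geolocation permission:', status.state); // 'granted' | 'denied' | 'prompt'
});
// A hypothetical { name: 'window-dimensions' } query does not exist in any browser.
```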
The problem is this will straight up crash many important sites. In the battle between usability and privacy, usability wins. Just try disabling javascript or cookies and see how long you last.
This is awesome. Other things I'd like to see added directly to Firefox are things like Ad and script blocking, HTTPS everywhere, and maybe something like a Tor button so that I don't have to rely on third parties for these critical privacy features.
What's the canvas fingerprinting one do? From what I (very poorly) understand, Tor returns a constant number for fingerprint requests. Can this be done for other requests?
It prompts the user to allow or deny a site access to data from the Canvas API. This data can uniquely identify the user's computer. The Firefox feature is identical to the one in the Tor Browser.
While I welcome another way to fight the constant tracking that we've come to know and love, this is, in my case, a break of workflow [1].
I do responsive web design, and spend a considerable amount of time resizing my browser window as a "cheap" way of previewing how it would look on narrower screens. Having the resize snap to multiples of 100 or 200px would make this experience horrible. Disabling it on localhost (where you're supposedly in control of what goes in and out the browser) could be a solution.
All of these sorts of features are disabled by default, and only enabled if you enable the privacy.resistFingerprinting setting in about:config (or install an extension that does that). Among normal users, this letterboxing feature especially would upset almost everyone (though not as much as the Tor feature it’s inspired by), so I cannot imagine it ever being enabled by default.
I also use that, but I find it faster and more straightforward to just drag the window edge, especially when devtools are not opened. One example is using browser and editor side-by-side on a single macOS split-full-screen.
I know you're not asking for advice, but in this case I would use LiveReload against multiple windows covering the breakpoints. Then you'll only need to wiggle edges to check breakpoint transitions.
You can open responsive mode via a keyboard shortcut without needing the dev tools open; on Windows it’s Ctrl+Shift+M, no idea about macOS but it’s doubtless easy to look up.
> Firefox's letterboxing support doesn't only work when resizing a browser window but also works when users are maximizing the browser window, or entering in fullscreen mode
Brilliant. All you have to do is change your window size for every site you visit.
The reason it's fingerprint resistant is because lots of other people will have the same reported screen size. Not because a different screen size is reported to different sites.
[1]: https://chromium.googlesource.com/chromium/src/+/master/docs...