Firefox to add Tor Browser anti-fingerprinting technique called letterboxing (zdnet.com)
720 points by commoner on March 6, 2019 | 209 comments

Every time there's something about online privacy with browsers, it's mostly Firefox or Safari. I wondered whether resisting fingerprinting was on Chrome's radar (guessing that it wouldn't be in Google's interest to add any feature that would thwart profiling users online), and I found this [1] confirming my guess (emphasis mine):

> Since we don't believe it's feasible to provide some mode of Chrome that can truly prevent passive fingerprinting, we will mark all related bugs and feature requests as WontFix.

I haven't read all the analyses linked in that article, but this sounds defeatist and lazy, much unlike the stance Chromium takes on security or performance on the web.

Contrast the above with what this article says about Firefox:

> Firefox's upcoming letterboxing feature is part of a larger project that started in 2016, called Tor Uplift.

> Part of Tor Uplift, Mozilla developers have been slowly porting privacy-hardening features developed originally for the Tor Browser and integrating them into Firefox.

If you value online privacy, your best choice is Firefox (though it requires some additional manual configuration). Safari comes second (its extensions directory could use more love). Firefox is also the choice you can influence the most: by using it, evangelizing it, and (if feasible) donating to it.

[1]: https://chromium.googlesource.com/chromium/src/+/master/docs...

Here's a good comparison: Android Chrome's user agent:

Mozilla/5.0 (Linux; Android 6.0.1; SM-G928F Build/MMB29K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Mobile Safari/537.36

Versus Android Firefox's user agent: Mozilla/5.0 (Android 9; Mobile; rv:66.0) Gecko/66.0 Firefox/66.0

Note how the Chrome browser announces your phone model and software build version to the world.

With regional models and carrier-customised software builds, the user agent alone can be used to fingerprint a user.

It's ironic that in a time when the browser used matters less than ever thanks to good HTML5 standard support and differences lie more in the fringe, most recent corners of the living standard, the user agent strings are more detailed than ever.

It would help if the browser vendors, particularly Google, took a step back and spent some time thinking about why user agent strings were invented. They're kludges for deciding how to present a web page, and they're far less needed nowadays: you can query things like mobile device dimensions, provide HiDPI resources that are only used when needed, and serve entirely different views depending on mobile or desktop, all without peeking at that ugly string. Beyond that, we have polyfills and frameworks that guarantee cross-browser compatibility and minimum supported versions, again without resorting to sniffing browser engine build numbers or worse, because the detection is now largely integrated into the standards themselves.

Actually browser compatibility is not getting better, it is getting worse.

I was working for a firm last year that made a system with a browser front end that only supported Chrome and Safari, not Edge or Firefox -- this is happening everywhere, and it is why MS threw in the towel with Edge.

I don't think that anecdote fares well against the many ways in which browser compatibility has improved.

Browser compatibility is better in early 2019 than I think it was in early 2011. I think it is worse than it was in early 2018.

Browser compatibility is improving because the incompatible browsers are dying off.

Yup, it's disgusting. Even in private mode, sites know it's me from my build version and IP.

To be honest, most people are confused about "private mode". I agree there should be privacy options enabled by default with it, but the reality is pretty much "no browser history will be stored (locally), and your session/cookies will be isolated between private mode and normal mode".

Indeed. It's intended as a method of watching porn without having it pop up in the address autocomplete later when your kids are trying to go to the Peppa Pig website.

I don't expect serial numbers to be sent even in public mode. Android build plus IP might as well be a serial number.

Google still knows my precise location sometimes, even when I'm connected to a VPN. The browser itself uses more than just your IP to determine it (Wi-Fi networks, phones linked to your account), and there's no way to control it.

> guessing that it wouldn't be in Google's interests to add any feature that would thwart profiling users online

I would actually think the opposite. Wouldn't it be better because then only Google would have that information? Only Google would be able to fingerprint. This is of course under the assumption (which is currently accurate) that Google has the majority share of browsers. But maybe it wouldn't be, because it would teach others how to thwart their fingerprinting.

I suspect that's the biggest reason Google was so interested in "https everywhere". That removed detailed browsing visibility from a lot of entities, but not Google.

I've seen this tendency on HN - if this person/entity does something, I suspect it must be bad or selfish.

On the subject of HTTPS - do you think the interests of ordinary consumers are in any way served by continuing on HTTP? Countless websites, even those accepting login credentials, used to think it was acceptable not to take the trouble to set up HTTPS. The only thing that changed the minds of these websites' operators was being marked "insecure" by the most popular browser. The shift to HTTPS was a definite win for privacy for every person who uses the web.

But no. Apparently, the "biggest reason" the people working at Google pushed this was that it was good for Google. They didn't care about all the benefits for end users; they only cared about themselves.

It's sad that slander like this can become mainstream view in a forum like HN.

Disclaimer - no connection to Google in any way. Don't even own Google stock.

> if this person/entity does something, I suspect it must be bad or selfish.

Isn't that the basic assumption that our whole economy rests upon?

"I've seen this tendency on HN - if this person/entity does something, I suspect it must be bad or selfish."

Companies tend to do things that advance their competitive advantage.

"slander" - really? For "I suspect", followed by a theory that an online ad company might bandwagon something (otherwise good) that helps them for less than altruistic reasons?

Google has long held the stance that anything that's good for the web is good for Google. Google is nearly ubiquitous online these days, so people spending more time online will almost certainly benefit their business.

That's why they don't need any ulterior motivation for efforts like HTTPS Everywhere, Google Fiber, or Chrome. The very fact that those technologies make the web safer, faster, or more powerful already benefits Google indirectly.

> Google has long held the stance that anything that's good for the web is good for Google.

Not denying that, but this could be reversed and would still make sense: Google has long held the stance that anything that's good for Google is good for the web.

Google is a corporation, the only goal of corporation is to make money.

So yes, that is the biggest reason - their business objective. It just happened that it also did something good for everyone.

> only goal of corporation is to make money

A corporation can set any goal the founders please. Making money need not be the dominant one. Obviously making enough to stay solvent helps with longevity.

Have you worked in one (or more than one) and observed its high-level decision-making? It's about money and nothing else. This is not unique or special, and the only way you'll ever truly believe it is any different is if you make the grandly naive mistake of listening to the marketing department (either internal or external).

Alphabet is a publicly traded company and the officers have a fiduciary duty to do what’s best for their shareholders. This normally means maximizing profits. A corporate officer could make a case to shareholders that not maximizing profits in X area could have Y long term benefit, but that’s not very common these days.

Of course, the officers of public companies try to do what’s best for shareholders. But here’s a different take on it by Tim Cook five years ago [1] (interpret it however you’d like):

> "When we work on making our devices accessible by the blind," he said, "I don't consider the bloody ROI." He said that the same thing about environmental issues, worker safety, and other areas where Apple is a leader.

> …

> He didn't stop there, however, as he looked directly at the NCPPR representative and said, "If you want me to do things only for ROI reasons, you should get out of this stock."

[1]: https://arstechnica.com/gadgets/2014/03/at-apple-shareholder...

Maximising shareholder value is very often not what's best for shareholders. The current popular myth that it is somehow mandatory has been coincident with a definite change for the worse.

Hasn't Amazon been notable for actively avoiding profit through much of its existence?



They reinvested their profits in themselves. That's not really avoiding profit (and I think there is a definition of profit that includes the company gaining value)

> Apparently, the "biggest reason" for the people working at Google pushing this because it was good for Google.

I mean, when I do stuff at my job, I also do things that are good for the company that pays me. That's the job, right?

The problem is that Google has positioned itself in a way where things that are good for Google might be bad for humanity as a whole.

Stating that the push for https is bad for humanity is a preposterous statement. Please back it up with some facts.

I think calling it the "biggest reason" is wrong but not necessarily entirely wrong. There's a difference between Google's reasons and your benefit.

It could simply mean the interests aligned. I get my credentials encrypted and Google gets more [insert stuff here]. When the interests don't align the result is a bit different. You could be sending tracking/location data every few minutes, adblockers could be crippled, you could be forced to sign in to sync Chrome data, etc. Usually the very same stuff Google relies on for revenue.

So what I'm saying is that it's not that outlandish to assume Google had more reasons than your benefit.

"Biggest" doesn't mean only. I don't see a more compelling motive as the top one.

Sorry, but there's really no reason to force HTTPS on static webpages, news sites, or any site where you're just browsing/consuming.

Google is smart enough that they could've enforced the penalty only on sites that should use HTTPS, such as those with any kind of form submission or login. But they chose not to, without much of a reason, and that's why I can believe it helped some hidden motive of theirs.

> Sorry, but there's really no reason to force HTTPS on static webpages...

I downvoted you because this is wrong. There are at least two good reasons for it:

1. It prevents MitM attacks. For example, some ISPs and free WiFi APs inject arbitrary code into unencrypted pages.

2. It improves privacy by hiding the request headers, including hostname.

Can you elaborate on that? Are you talking about the https everywhere extension from the EFF, because I wasn't aware Google had a part in that.

Google is steadily making it harder for users to use an http website.

They’re talking about Chrome’s (and other browsers’) marking of http sites as insecure, and Google-as-a-search-engine penalising non-HTTPS sites. These are good things.

I don't agree with him but Google penalizes non-HTTPS results etc.

SERP positions and browser features, like: https://security.googleblog.com/2018/02/a-secure-web-is-here...

There's been a lot of press recently about shady data brokers selling information to Facebook. I would be genuinely surprised if those same data brokers don't sell to Google.

(In addition to full-time data brokerage firms, there are third-party phone app APIs that slurp data and feed it to advertisers. Facebook has at least one. Presumably Google does too...)

Google does buy data for sure. One example: https://www.bloomberg.com/news/articles/2018-08-30/google-an...

Yeah, with a Google account and Google login on every website, they can still track everyone even without all these other methods.

Firefox is a good start, then it's also worth checking your browser with the EFF Panopticlick to see how your settings affect your uniqueness.


There is a way to stop fingerprinting. That way is serving pages via distributed network (over a WoT or torrent-like thing).

All these other ways do is give people the illusion that they're safe from being tracked, when the reality is that they're tracked just the same, but by fewer people so the data is more valuable. This means that the money is centralizing around the actors with the most inexplicable methods of tracking; which are almost always the worst actors.

I hate it too, even though I'm not blameless. It's impossible to compete without a level playing field, and that playing field needs to be technically enforced, because otherwise we get region shopping and advertising / analytics models that push people to create intractable mechanisms so they can paper over how tracking fed into it.

For example, imagine a world where I'm bidding to show an ad to a visitor of nytimes.com. Now, I may not track the user, but if anyone is, they can incorporate what they know and sell that traffic back to me on a CPA model. All I see is the incoming traffic. I don't track anyone (wink, wink) but there is no difference.

In the long run this will be solved one way or another, and all these online surveillance capitalism companies will crash and burn. Either we get a web with technical guarantees, or we get a balkanized internet where every state makes its own weird laws about what is allowed or not.

Protocol wise, basic shared VPNs will stop most everything short of a semi-global passive adversary. The problem is running hostile code on your own machine, coupled with browser makers thinking it is a generally fantastic idea to allow that hostile code to access a whole slew of security-sensitive information.

Sure, a VPN won't stop the region bullshit, and can even be outright blocked. But if adoption rose to the point where websites didn't want to lose the traffic, those practices would diminish.

I do agree, though, that a non-immediate, user-centric protocol is sorely needed, especially to stop those global snooping adversaries.

>and can even be outright blocked.

That's my biggest fear. Yesterday I woke up to find that Three (a big telco provider in the UK) had blocked Mullvad's API path as 'adult content'. I had to physically go to a store and verify that I was an adult to be able to use my VPN, the very same VPN I use because I have so little trust in the UK government (I'm not from here and am not staying long term, but I hate that the society regularly reminds me of 1984).

I was referring to blocking by websites, as there is actually some competition between them. For example, a few major e-commerce sites frustrate or outright block traffic from my VPS, so I've simply stopped visiting them rather than poking holes.

A similar dynamic is present for ISPs wherein if most people expect "Internet access" to mean something that works with VPNs, then ISPs can't block it [0]. But that's a harder state to reach as there's much less competition between ISPs. A commerce site wouldn't want to forgo 10% of their business, but a government/quasi-government has no problem demonizing 10% of their subjects.

Which is why we ultimately do need non-real-time protocols and namespaces for the bulk of communication.

[0] Nagging systems like the UK would still exist, but they couldn't progress further to outright banning it.

I'm pretty sure sticking it on a distributed network is potentially much worse unless it is specifically designed to prevent tracking.

> In the long run this will either be solved one way or another, and all these online surveillance capitalism companies will crash and burn.

I sincerely hope that happens without causing more harm to people. It also seems to be a long way away.

> The general idea is that "letterboxing" will mask the window's real dimensions by keeping the window width and height at multiples of 200px and 100px during the resize operation --generating the same window dimensions for all users-- and then adding a "gray space" at the top, bottom, left, or right of the current page.

> The advertising code, which listens to window resize events, then reads the generic dimensions, sends the data to its server, and only after does Firefox remove the "gray spaces" using a smooth animation a few milliseconds later.
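Reading the quoted description literally, the reported size could be computed roughly like this (a sketch of the scheme as described in the article, not Firefox's actual code):

```javascript
// Sketch of the letterboxing rounding described above: snap the reported
// window dimensions down to multiples of 200px (width) and 100px (height),
// so many distinct real sizes collapse into the same advertised size.
// The leftover pixels become the gray margins.
function letterboxedSize(realWidth, realHeight) {
  const width = Math.max(200, realWidth - (realWidth % 200));
  const height = Math.max(100, realHeight - (realHeight % 100));
  return { width, height };
}
```

For example, a 1389x766 window and a 1310x741 window would both report 1200x700, so those two users look identical to the advertising code.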

Would using a setTimeout() on the window resize event bypass this? Send the data 20-50ms after resize is completed giving enough time for the letterboxing stuff to go away revealing the actual dimensions, or something? They say it only blocks the dimensions during the resize event and FF removes the letterboxing "a few ms later"

Presumably the implementation is smarter than being defeated by this easy trick, but I too wonder how it works.

> Finally, an extra zoom was applied to the viewport in fullscreen and maximized modes to use as much of the screen as possible and minimize the size of the empty margins. In that case, the window had a "letterbox" (margins at top and bottom only) or "pillbox" (margins at left and right only) appearance. window.devicePixelRatio was always spoofed to 1.0 even when device pixels != CSS pixels.

So presumably the window size is not being reset to the real size; Firefox just does a smart zoom-in. In other words, the fake size remains throughout the entire session.

> Presumably the implementation is smarter than being defeated by this easy trick, but I too wonder how it works.

I wouldn't make too many assumptions. Browser vendors have overlooked seemingly "simple" things in the past [1].

[1]: https://news.ycombinator.com/item?id=13329525

> Would using a setTimeout() on the window resize event bypass this? Send the data 20-50ms after resize is completed giving enough time for the letterboxing stuff to go away revealing the actual dimensions, or something? They say it only blocks the dimensions during the resize event and FF removes the letterboxing "a few ms later"

No, it will be a setTimeout on the document load event that will poll the window size every 100ms from here till the page is evicted by a close or navigation event, increasing the detrimental effect of adtech.
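The polling approach described above is only a few lines; a sketch (`report` is a hypothetical exfiltration function, and `win` stands in for the browser's `window` object):

```javascript
// Sketch of the polling tracker described above: rather than listening
// for resize events, sample the reported window size on an interval and
// report every change. `win` is the browser's `window` (passed in here
// so the logic is easy to exercise); `report` is a hypothetical function
// that would beacon the value to a server.
function pollWindowSize(win, report, intervalMs = 100) {
  let last = null;
  return setInterval(() => {
    const size = win.innerWidth + "x" + win.innerHeight;
    if (size !== last) {
      last = size;
      report(size);
    }
  }, intervalMs);
}
```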

Haven't they thought about not broadcasting the window size at all... wtf. We are doomed, apparently.

Apps need it to determine where to place elements.

If it weren't broadcast, you would still be able to reverse-engineer it by sticking elements outside the viewport and seeing whether they're hidden or not.

Turns out anonymity is super freaking hard. :-/

Some would take enhanced privacy over properly-functioning sites. I wonder how broken sites would appear if the browser simply lied about such things.

Wonder no longer. There is an add-on for Firefox called CanvasBlocker.


I have been using it and it makes some websites go nuts and Google CAPTCHA takes forever.

There’s like a billion side channels to determine how big the screen is unless you just want to entirely break basic css. Which is a pretty unreasonable way to address this problem.

I block CSS altogether on most sites with uMatrix, so I do not think it is that unreasonable.

Doesn't that make most sites unusable?

Surprisingly, most sites are perfectly usable with CSS disabled. They end up looking a bit like "motherfucking website"[1], or what you see in a text-based web browser.

[1] https://motherfuckingwebsite.com/

Disabling both CSS and JS actually works around usability issues on a bunch of sites ¯\_(ツ)_/¯

It's like reader mode, except it works on more sites.

Wouldn't loading all external links right away (think background-image) solve this? How does the site exfiltrate the gathered information without javascript or tracking pixels?

Edit: Having a bunch of HTML buttons/links, showing different ones to agents based on their resolution, and waiting to see which ones they follow would break this, unless everyone crawls a lot of stuff they don't need. Pretending to be one of a few common sizes is probably a better solution.

People with that preference usually just turn off javascript.

...and they would still be susceptible to CSS-based fingerprinting.

How? Admittedly my knowledge of CSS is dated, but without scripting enabled you can't set cookies, make automatic server requests, or even automatically fetch an external CSS file (which could be served and counted).

It's not something I've considered before, and I am genuinely curious how this would work.

You can make server requests by loading images and fonts. Browsers only load those resources they actually need, so there's lots of opportunities for conditionally triggering requests. Media queries for window size, fallback fonts to check installed fonts, css feature checks to make guesses at the browser type, ...
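For a concrete illustration, the media-query and font tricks described above need no JavaScript at all; a sketch (the `/probe` endpoint is hypothetical):

```css
/* Each media query conditionally requests a distinct URL; the server
   logs which one was actually fetched. /probe is a hypothetical
   tracking endpoint. */
@media (max-width: 500px) {
  body { background-image: url("/probe?width=lte-500"); }
}
@media (min-width: 501px) {
  body { background-image: url("/probe?width=gte-501"); }
}

/* Font probing: the url() source is only fetched if the local() face
   is missing, revealing whether the user has that font installed. */
@font-face {
  font-family: probe-font;
  src: local("Helvetica Neue"), url("/probe?font=helvetica-neue-missing");
}
body { font-family: probe-font, sans-serif; }
```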

oh my god, what a world we live in!

> Apps need it to determine where to place elements.

Could they hide the actual window dimensions from website javascript by only allowing a special kind of sandboxed function to access it? The website's code only really needs to do arithmetic on those values, so the browser could deny access to the actual values and force the code to manipulate them symbolically.

If I'm allowed to query the position and/or size of anything else in the DOM, I can figure out the window size by aligning elements at the edges, or by making one 100vw x 100vh and querying its position/size, so you really can't let me access the position or size of anything. I might have elements styled based on media queries, or old-fashioned DOM queries, so if I'm allowed to change how a button looks based on window size, I can then check something about that element that isn't directly related to size or position. For example, it doesn't make sense to have a "download the app" button on desktop, but if you let me make it invisible, then you can't let me query its visibility. This is true of all styling: if you let me derive it from vh/vw, then you can never let me query it afterwards, which makes a lot of things tricky.

Trading functionality that relies on DOM/media queries for privacy is totally valid. I'm just saying that it will make some non-obvious things impossible for a developer to do, and there are sites today that people enjoy using whose core functionality would break if this is the future. Browser-based CAD tools were recently discussed on HN, and those are right out. Really, I think the future is both, but I'm not quite sure how they'll coexist.
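The 100vw/100vh trick mentioned above is only a few lines; a sketch (`doc` is the page's `document`, taken as a parameter here purely so the logic is easy to exercise outside a browser):

```javascript
// Sketch of the viewport probe described above: infer the window size
// without touching window.innerWidth, by measuring an element styled to
// fill the viewport. In a real page you would call measureViewport(document).
function measureViewport(doc) {
  const probe = doc.createElement("div");
  probe.style.cssText =
    "position:fixed;top:0;left:0;width:100vw;height:100vh;visibility:hidden";
  doc.body.appendChild(probe);
  const rect = probe.getBoundingClientRect();
  doc.body.removeChild(probe);
  return { width: rect.width, height: rect.height };
}
```

So blocking `window.innerWidth` alone achieves nothing; every measurable element is a side channel.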

> Trading functionality that relies on DOM/media queries for privacy is totally valid

Perhaps it should be a site-specific permission like the microphone or camera. Your generic news site doesn't need that functionality (and shouldn't ask for the permission - you'd know something shady was going on) but your browser-based CAD tool would and you'd grant it there.

This would cause permission fatigue. Only the most sensitive things should be behind a permission, and the usage of these capabilities is widespread enough that they should not require one.

If we went down this path, I think any permissions dialog would come at the end of a very long PR campaign and feature ratcheting to get developers to update their sites to not need the permission unless absolutely necessary. Sort of like what's happened with the deprecation of Flash.

Are you also going to download every resource listed in every @media section of the CSS regardless of screen size?

That part doesn't seem too unreasonable to me, but you could also just go with the largest available size and then scale it as necessary on the client.

Scaling down the largest size isn't always appropriate (though would probably work in most cases).

One example might be a set of images where the smaller images wrap text more aggressively to work better on a screen that's not as wide.

Good point, I've seen comparable use cases in the wild but it slipped my mind!

The browser could pick a fake screen size, and behave in a way that is consistent with that fake screen size. This would probably break many sites, but it would mitigate fingerprinting if a common size was used.

This solution is just begging for side-channel attacks.

Firefox had better make sure its timing is not affected by such shenanigans, for instance.

I doubt that is avoidable, as the browser would still probably need to render at the false viewport dimensions. For a common adversary, fingerprinting based on timing would be more involved and less useful.

I don't get it. Don't the majority of people browse at full screen, on common devices which all have the same fullscreen dimensions?

Who out there browses to a site, resizes their window, then browses to another website, then resizes again, and so on? That makes no sense.

Even if they do, there's variation in what "full screen" means. Some people have the bookmarks toolbar enabled, others don't. Some have compact icons, others don't. Some keep a sidebar open, others don't. Some use a theme that changes the size of toolbars. Some have a dock/taskbar always visible, some have differently sized taskbars, etc.

This all leads to a huge variation between users of even the same screen size (e.g. 1920x1080), since the portion of the screen available to the page is different.

The Tor browser fixes this by having the window always be the same size on all machines, regardless of screen resolution. This is a bit annoying because it means you have less stuff on the page visible at a time, but since it makes you look the same as every other user, it's worth it for privacy conscious users.

As a user with an ultrawide monitor I have several browser windows open and arranged in various configurations at all times and often resize them.

Probably not a super common scenario, but not ultra rare either.

Yes. I'm in adtech. 60% of browsers are mobile/tablet which are already fixed. The rest are almost always fullscreen. Maybe 2% have non-standard sizes.

When you two say fullscreen, surely you mean maximized? I imagine a sizeable fraction of users don't even know how to fullscreen a window.


Fixed except when the user enables Android split-screen mode! I believe split-screen mode means the height of the browser window can change at runtime (in a JS-visible way), but I haven't looked at it recently.

I'm not sure what percent of people customize their dock height on macOS, but that setting uses a slider, which would cause a bunch of unique heights for a maximized browser.

The OS chrome between users varies a ton. Each taskbar, dock and titlebar can have their own size. In my case I'm using a window manager without decorations, so I don't even have a titlebar!

I would actually really like an answer to this question, I’ve often thought about it!

Huh, I thought the original was a sarcastic question. In that case, let me explain:

I keep a browser window open at all times. It is never full screen, because if it were full screen I wouldn't be able to see multiple windows at the same time.

I keep my browsing window as close to 1024x768 as possible. In 2019, a lot of websites can't handle a browser window using a mere 75% of the laptop screen, so they either render incorrectly or, worse, switch to a mobile view. When that happens, I either blacklist the website forever in a contemptuous fervor, or just resize the window. Apparently, this resizing action is trackable.

When I say "as close to 1024x768" as possible, I mean exactly 1024x768 unless I have resized it and forgotten. I use a little AppleScript thing to resize it to 1024x768, precisely for browser fingerprinting reasons. When you resize the window by hand, you typically end up with a VERY unique window dimension.

The privacy.resistFingerprinting option will always launch your browser at exactly 1000x1000 size. It's probably preferable to your script.

Thanks for the answer. I just thought people usually kept their windows at full screen, but reading all the replies perhaps I am the outlier here!

Even if you had 100 users with a 1024x768 screen resolution, they can be fingerprinted further because of small differences in the browser. Zoom setting, toolbar size, the bookmarks button showing, full-screen mode, small icons, additional toolbars, taskbar auto-hide, and a larger-than-standard taskbar all affect the viewable area of the browser, and this is what the site operator or analytics will see.

Browsing in full screen is just a waste of space on my preferred devices.

Does that matter? Don't devs cater to the outliers?

It matters because if 99% of people have the same 5 configurations and only the outliers are identifiable, then this method would not be as valuable for spying as it is reported to be.

This! So what’s the answer??

Would something like Perl's taint functionality work? I.e., all values derived from size, position, colour, pixel data, user agent, etc. are marked as tainted, and are stripped (or randomized or replaced with default values) from data that is sent over XMLHttpRequest and other communication methods. It's probably extremely hard to make that watertight though.

Even if it was implemented perfectly, you could work around that using timing side channels.

For example, multiply the value (e.g. window width) by some huge number, perform a slow operation in a loop that many times, and finally clear a flag. Meanwhile another thread is filling an array one by one until the flag gets cleared. The last non-tainted index in the array indicates your approximate window width.
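A rough sketch of that timing channel (iteration counts are arbitrary; a real attack would calibrate them, and would likely use a worker thread as described):

```javascript
// Sketch of the timing side channel described above: burn CPU time
// proportional to a tainted value, then recover an approximation of it
// from the clock, which taint tracking would not flag as derived data.
function busyWork(iterations) {
  let acc = 0;
  for (let i = 0; i < iterations; i++) acc += Math.sqrt(i);
  return acc; // returned so the loop can't be optimized away
}

function leakViaTiming(taintedValue, iterationsPerUnit) {
  const start = Date.now();
  busyWork(taintedValue * iterationsPerUnit);
  return Date.now() - start; // elapsed ms encodes taintedValue
}
```

The returned elapsed time is an ordinary untainted number, yet it scales with the secret, which is exactly why perfect taint tracking is so hard to make watertight.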

That would make it difficult to serve different-sized images to different-sized screens.

> Apps need it to determine where to place elements

Annoying apps which control the layout in JS instead of letting the browser do it will need it.

>Apps need it to determine where to place elements.

This determination can't be done client-side? In other words, if I resize the window, it's going to send the new size to determine where to place the elements in the "new" area?

document.write('<img src="width.png?' + window.innerWidth + '">')

That's a problem.

The W3C should probably create a new, rich spec hundreds of pages long so that frontend developers may instead declare images as a unitless set of point relationships to be rendered at any resolution without digital artifacts.

For example, instead of working on the pixel level, the developer would be free to simply declare, "an arc may exist in one of these four locations." Then, merely by declaring two further "flag" values, the developer can communicate to the renderer which three arcs not to draw, except for the edge case of no arc fitting the seven previously-declared constraints.

Just imagine-- instead of a big wasteful gif for something as simple as an arc animation, the developer would simply declare, "can someone just give me the javascript to convert from arc center to svg's endpoint syntax?" And someone on Stackoverflow would eventually declare the relevant javascript.

The browser can also ship with a pretrained GAN, so the site just asks for a picture of a cat and then the GAN creates one as needed, but nobody will know exactly which cat you saw.

That will be remembered as the day when "Catwoman" went from being sexy to being weird and disgusting.

Now I want to train a GAN that takes images' alt text and reproduces the image.

I can see where this is heading. LOL

Yes, but how do you keep the client from then sending that data to the server?

You can't. Even if you block JavaScript, they can still get it from CSS media queries, which can specify a different file to download for each screen size.
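As a concrete illustration of that CSS leak (the URLs and breakpoints here are made up): each media query pulls a distinct resource, so the server learns the viewport bucket from its own access logs, no JavaScript required.

```css
/* Each matching query fetches a different URL; the server's access log
   then reveals which screen-size bucket the visitor falls into. */
@media (max-width: 500px) {
  body { background-image: url("/bg.png?size=small"); }
}
@media (min-width: 501px) {
  body { background-image: url("/bg.png?size=large"); }
}
```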

The best solution is to use the same screen size as everyone else so you don't stand out. And that's what this does.

Right, that’s what I’m saying. Keeping the data client side as ggp suggests is not possible.

> Turns out anonymity is super freaking hard

Indeed, but not revealing the screen size is super easy. Just turn off javascript (except, maybe, for a whitelist of 2 or 3 sites where you really need it).

> Turns out anonymity is super freaking hard. :-/

If you insist on letting random people run code in your document reader.

apps? do we consider websites apps now?

but either way if you have the JS, CSS and HTML, you should know where to put elements.

Are nyc (news yc com) people part of the problem?

What CSS file did the browser fetch? The one for screens less than 500px wide? Or the one for screens that are 504px wide?

There are a million ways to exfiltrate UI parameters through JS and CSS. It’s hard to both prevent that and still allow JS and responsive pages.

just grab them all... not a huge deal, they are so small.

Okay. Those different css files all specify different images on the server, depending on the media query. Are we downloading all those images as well?

right... developers suck, overall ( I could not reply to the comment below because nyc would not let me)

It's more a case of the web being used in ways in which it wasn't really originally intended. Of course developers can implement things poorly and create problems (and often do) but demand for things like responsive sites is user-driven in my experience.

If you don't understand how the web works and actively dislike the community I don't understand why you keep commenting here.

it's all good my little monkey.... also tell me why most websites use files from 20 different domains if you understand the web so well....

I recommend the privacy.resistFingerprinting about:config setting mentioned. It's been available for a while and does other things too, like changing your user agent.

I've been using privacy.resistFingerprinting for a while and also recommend it, but there is one major "side effect": your reCAPTCHA score will drop to 0.1 making many websites really tedious to use. It's a price I'm willing to pay though...

reCAPTCHA is a Google thing so it gets blocked in my browser already anyway (by uMatrix). If I need to load it to see a website, I close the tab immediately and go somewhere else.

Do you never buy anything online or log into anything? They're blocked for me too but I will unblock them temporarily when the need arises.

You need reCAPTCHA to log into HN (or at least I do when I'm working from some parts of the middle east)

I log into HN almost everyday, each time from a different location (Tor) and I've never seen a reCAPTCHA on HN.

Sounds like you might have been going through some sort of proxy...

I use a mobile app and have gotten errors saying I need to solve a captcha. Since I can't do that on this app, it just means I'll stop commenting for a couple days until HN decides to stop bothering me.

That's happened twice, and it really hasn't been a big deal. I've never actually done a captcha for HN, it's just not worth it to me.

Which HN app are you using? Materialistic (Android) and Hackers (iOS) have worked for me with no CAPTCHA issues, even with a VPN.



> one major "side effect": your reCAPTCHA score will drop to 0.1 making many websites really tedious to use.

I instantly leave a website that has this aggressive reCaptcha that uses free labour to train algorithms.

I personally take the route of training the algorithms poorly. It wants me to select streetlights? Fine, I'll pick all but one streetlight and maybe even a tree. Do I have to click more boxes because of this? For sure. Is it worth it? Yep.

>Is it worth it? Yep.

It is not. Hundreds of thousands of people got/will get the same picture and marked the squares correctly.

Ditto. This thing must be boycotted.

Making recaptcha worthless by making it so intrusive no one will use it.

I love this idea. Recaptcha is god awful.

It could work if Firefox had as much marketshare as Chrome...

Just install Buster, it'll solve reCaptcha for you (just make sure to set a non-google STT engine)

Just another one of 10 thousand reasons why reCAPTCHA is utter cancer on the web. I've been traveling a lot for the past few years, and public wifi combined with a bunch of other things makes _my_ reCaptcha ask me to answer multiple queries every single time.

By this point if I see reCaptcha I just close the window unless it's something _really_ important.

Also reCaptcha isn't any more secure than any other recent captcha solution when it comes to bot protection FYI.

No wonder reCAPTCHA sucks so hard. I hardly ever run into it, but it's a PITA.

Is there any reason this isn't on by default? I don't know exactly how it works, but to my understanding anti fingerprinting tech generally works better when everyone uses it (otherwise you stick out as the "anti fingerprinting" browser)

It can decimate usability for the average user.

Every time you see a Captcha, you may have to do four or five to be considered "human" because the system struggles to determine who you are.

> "because the system struggles to determine who you are. "

Is the tile fade-in also the system struggling to figure out who you are? No, it's vindictive. Punishment for not browsing the web the way google wants you to.

Or maybe it's to make it obvious that a tile has changed...

No, sorry but that's nonsense. reCAPTCHAv2 doesn't do the tile fade in unless your v3 score is low. The fade in I'm talking about can take one to five seconds per tile, plainly designed to annoy the user.

Here is a video of the fade-in in action: https://youtu.be/zGW7TRtcDeQ?t=89

(Note that you do not need to be on a shared IP to experience this. Merely using Firefox with resistFingerprinting and an ad blocker is enough to trigger this behavior on an IP Google normally 'trusts' when using Chrome.)

This happens to me all the time at home, I use Safari with uBlock Origin and reCaptcha never fails to decide to fuck with me.

I have a static IP address, and Google will happily give me a not-dick captcha if I use another browser without anti-tracking features enabled (even with a clean profile).

>Here is a video of the fade-in in action

Okay, I agree that's punitive. I've never seen a fade that slow, so it must be at the lowest levels of trust.

If you use a VPN (a popular one like PIA or ExpressVPN) you will almost always get a fade in that slow. And it is really annoying.

I submit all the time before realising one has started to change. It's terribly frustrating.

In addition to what other posters have said, I noticed it hides the timezone, so messaging apps showed timestamps that looked wrong because it thought I was somewhere else. Haven't used it in a while though, so this may have changed.

It'll considerably deteriorate your browsing experience due to various layers of "bot detection" stalking the web.

Aside from the downsides mentioned in other comments, this significantly reduces JS timer accuracy which will make games and WebGL laggy and unusable.

In about:config if you search for 'resistFingerprinting' there seem to be sub-settings which you can tweak to disable the timer modifications, but even after tweaking them I wasn't able to get performance to be as smooth as when resistFP was completely disabled.
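The timer coarsening behind those sub-settings works roughly like this (an illustrative sketch, not Firefox's actual implementation; the 100ms granularity is made up):

```javascript
// Clamp a high-resolution timestamp to a coarse granularity. With only
// ~100ms resolution, code that tries to measure tiny per-operation timing
// differences (as in timing side channels) mostly sees identical values,
// but animation frame scheduling gets visibly choppier too.
function coarseNow(nowMs, granularityMs = 100) {
  return Math.floor(nowMs / granularityMs) * granularityMs;
}

console.log(coarseNow(1234.567)); // → 1200
console.log(coarseNow(1299.999)); // → 1200
```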

Changing your user agent will do terrible things to captchas. I actually change my user agent specifically to test the fail state of captchas. I'd suggest only turning that on if you know what you're doing.

Which means it's working well, because captcha is a tracking tool.

Remember when the first iteration of reCAPTCHA was designed to help digitize books?


Now it's used for tracking and training proprietary road image recognition for some mega corporation that had a slogan of "do no evil", which is quite hysterical really.

There is no doubt in my mind that reCAPTCHA data will be used to kill people in the future. It's all very useful for drones.

I agree it's wonderful. Helps with fighting trackers

I try enabling this occasionally, but it causes my zoom preference to be forgotten for each site after I close the tab. Seems to be intentional (https://bugzilla.mozilla.org/show_bug.cgi?id=1369357).

I need zoom to not ruin my eyes - is it just too hard to mask the true zoom?

With the letterboxing, it seems like it would mostly not do anything when using a tiling WM with fixed splits. Does that sound right?

> it seems like it would mostly not do anything when using a tiling WM with fixed splits

In the bug report [1] it says:

> We haven't yet landed this feature in Tor Browser for a few reasons:
> - ...
> - Tiling window managers on Linux are hard to detect. Any implementation will need to behave appropriately for those.

So it appears they are still working on that.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1407366

This is horrible from a UX perspective. There are many fingerprinting techniques besides this. I don't see how adding a user hostile behaviour will help.

Fortunately, this is entirely optional. The "privacy.resistFingerprinting" option bundles a set of features that make it more difficult for sites to uniquely identify the user at a cost to usability. It's up to each user to determine whether that usability cost is worth the privacy improvements. On Firefox, it's an opt-in setting, off by default.

What else can they do to decrease the effectiveness of increasingly hostile trackers?

An easy option is host-based blocking of ad networks. The real problem is canvas fingerprinting, but that's an inherent issue of the whole graphics stack, so everything depends on your freak-out factor.

Maybe firefox should also install ublock origin by default?

This isn't just for power users: pages would load even faster for regular people too, making Firefox more popular and increasing its share.

Or are they waiting until their user share falls below 5%? Maybe they should listen to Andy Grove and prepare now.

They should turn fingerprinting to max and install uBlock origin by default.

It might mean millions of FF users would suddenly struggle with captchas, but it might also mean that site creators just stop using reCaptcha and similar.

Or, most 'regular' users will switch to Chrome because reCAPTCHAs work better in that browser. Firefox needs to make sure they're not going to ruin the user experience by breaking sites like this

I'm guessing they would lose their financing from Google by adding UBlock.

I don't know the terms, but surely there are enough parties willing to pay them; MS would love to take shots at Google from Firefox's shoulders.

The engineering effort required to implement this feature is better spent in doing useful things.

Microsoft just switched the rendering engine for Edge to Google's rendering engine, helping increase the monoculture.

Despite all Microsoft's posing and snuggling up to the Open Source world, they're just as bad as Google and Facebook if given the chance. (And they have been as bad in the past.)

That's just a bet, I doubt Mozilla is willing to blast google on the chance there is someone willing to pay them, they'd probably rather have a sound financial cushion from non-google before going after google.

Nice, but the more you want privacy, the more CAPTCHAs Google will throw at you.

Google isn't going to bother the general public like that, that’s limited to small groups like techs who block the canvas fingerprinting. Do you think Google is going to spam people that use the Safari default intelligent tracking protection?

But I've already got code in my xmonad.hs that clamps firefox windows to common monitor sizes?

It's truly unfortunate that browsers just punted on security, dumping endless amounts of sensitive information into a purported sandbox. Why bother developing something with a secure mindset to begin with, when you can just band-aid on patches later?! It's the sendmail/ActiveX philosophy all over again, only now with network effects.

Why can't the ad industry just accept that there are some people out there who don't want to see ads and wouldn't click on one to begin with? Then they can honor Do Not Track and those who choose to work in adtech can start working on things that are more productive to their business.

That might have worked if Microsoft hadn't enabled Do Not Track by default.

Ads are not always for clicking. If you don't want to see ads, then you should pay for content or leave.

I’ll take the third option: blocking every single ad and making my browser as untrackable as possible.

That's definitely true of the ones that hijack browsers and attempt to trick users into installing malware.

I have and do pay money for content. For those who don't offer that option I've got uBlock Origin as ads (alongside privacy) also present a security issue.

The ad industry was ready to honour DNT. MS killed it by going default.

Sure, they gave their pinky promise not to track those who set it.

You can opt out of ads, actually, and those will be respected. And companies in the industry knew they'd play by the rules or suffer.

I was in favour of DNT and made a little browser extension that would allow you to DNT some sites and not others. My hope was that eventually we'd be in a state where you could signal to the site right away whether you'd accept tracking or not and the site could paywall you if you didn't. That way it's a "pay with your data or money" loaded into the User Agent and I think I like that. It respects user choice.
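For reference, the DNT signal the extension would toggle is just a request header ("DNT: 1"), so a cooperating site only needs a trivial server-side check; this sketch assumes an Express-style lowercase header object, and the function name is made up:

```javascript
// "DNT: 1" in the request headers means the user asked not to be tracked.
// A site that chooses to honor it (it was always voluntary) can branch
// on a check like this before loading any tracking scripts.
function wantsNoTracking(headers) {
  return headers['dnt'] === '1';
}

console.log(wantsNoTracking({ dnt: '1' })); // → true
console.log(wantsNoTracking({}));           // → false
```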

I've been using FF with resistFingerprinting on since it was available. Letterboxing does break a lot of websites and apps, sometimes making them unusable due to incorrect positioning and scaling of elements.

If Firefox could make their Dev tools as good as Chrome's, I would switch immediately.

Firefox is already valuable for browsing on mobile phone, where there is not much space on screen to have Dev console anyway. I recommend trying Firefox Mobile

Firefox on Android won my heart with extension support, which means I can have an ad blocker (uBlock Origin, no less) without root

You using dev tools on all websites you visit?

At this point Firefox should just merge with Tor if they want to market themselves as the pro-privacy browser. Right now I just use Chrome when I'm using my real identity for work and shopping and social media anyway as it's a very good browser and supported everywhere and has an open source version through Chromium.

When I need actual privacy, I just use Tor which supports most sites and is way more protective of my privacy than firefox. May switch to Brave in the future for this use case as they're adding Tor support but right now Chrome + Tor every once in a while works best for me.

>May switch to Brave in the future

In the future? Tor tabs are already a feature of Brave, as of today.

It wasn't the last time I tried on Linux. Thanks for the heads-up.

How will that not break most of js positioning done on window.resize?

Is there any legitimate reason for it? Besides playing pong with browser windows.

The Tor Uplift process later continued in Firefox 55 when Mozilla added a Tor Browser feature known as First-Party Isolation (FPI), which worked by separating cookies on a per-domain basis, preventing ad trackers from using cookies to track users across the Internet. This feature is now at the heart of Project Fission and will morph into a Chrome-like "site isolation" feature for Firefox.

This is just factually incorrect.

Why not simply allow the user to control the js apis that are available/enabled, kind of like the camera/mic permissions? If sites simply cannot use the mouse events or window size events, they won’t be able to fingerprint. This grey box alternative seems like a complicated hack.
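The mechanism for gating such APIs already exists in the language; an anti-fingerprinting extension can shadow the getters. A minimal sketch (the fixed decoy value is arbitrary, and a plain object stands in for `window` so it runs outside a browser):

```javascript
// Redefine a property with a getter that returns a fixed decoy value,
// the way a content script might mask window.innerWidth.
function maskProperty(obj, name, fixedValue) {
  Object.defineProperty(obj, name, {
    get: () => fixedValue,
    configurable: true,
  });
}

// A stand-in for `window`; in a browser you'd pass window itself.
const fakeWindow = { innerWidth: 977 };
maskProperty(fakeWindow, 'innerWidth', 1200);
console.log(fakeWindow.innerWidth); // → 1200
```

As the reply below notes, the hard part isn't the override, it's that sites break when the values they read stop matching reality.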

The problem is this will straight up crash many important sites. In the battle between usability and privacy, usability wins. Just try disabling javascript or cookies and see how long you last.

If it helps defeat tracking then I’d like an option to snap to pre-defined sizes as I resize the window.

This is awesome. Other things I'd like to see added directly to Firefox are things like Ad and script blocking, HTTPS everywhere, and maybe something like a Tor button so that I don't have to rely on third parties for these critical privacy features.

What's the canvas fingerprinting one do? From what I (very poorly) understand, Tor returns a constant number for fingerprint requests. Can this be done for other requests?

It prompts the user to allow or block a site's access to data from the Canvas API. This data can uniquely identify the user's computer. The Firefox feature is identical to the one from the Tor Browser.

Screenshot: https://thehackernews.com/2017/10/canvas-browser-fingerprint...

https://www.torproject.org/projects/torbrowser/design/ (see the "HTML5 Canvas Image Extraction" section)
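To make the mechanism concrete: a tracker renders text to a hidden canvas, extracts the pixels with `toDataURL()`, and hashes the result; tiny GPU/driver/font-rendering differences change the pixels, so the hash tends to identify the machine. Only the hashing half runs outside a browser, and FNV-1a here is just one cheap hash a tracker might pick:

```javascript
// 32-bit FNV-1a hash, the kind of cheap hash a tracker might apply to
// the canvas.toDataURL() string to get a compact fingerprint.
function fnv1a(str) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, mod 2^32
  }
  return hash >>> 0;
}

// In a browser, the input would come from something like:
//   const c = document.createElement('canvas');
//   c.getContext('2d').fillText('fingerprint me', 2, 2);
//   const id = fnv1a(c.toDataURL()); // stable across visits on one machine
```

The Tor Browser countermeasure is to gate the extraction step (toDataURL/getImageData) behind the permission prompt shown in the screenshot, so the hash input never leaves the browser without consent.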



Thanks, those look like good reads. So I see in my version of FF that it is enabled as true but I don't recall ever seeing a prompt.

I would love to see a blog post about some of these features and why things are difficult.

I've been using Brave for a while; it has a function to surf the web through the Tor network, which seems pretty safe.

I'm glad to hear that Firefox also gives a lot of value to privacy.

Why not just download/use Tor Browser?

If changes are included upstream, that reduces the maintenance burden.

While I welcome another way to fight the constant tracking that we've come to know and love, this is, in my case, a break of workflow [1].

I do responsive web design, and spend a considerable amount of time resizing my browser window as a "cheap" way of previewing how it would look on narrower screens. Having the resize snap to multiples of 100 or 200px would make this experience horrible. Disabling it on localhost (where you're supposedly in control of what goes in and out the browser) could be a solution.

[1] https://xkcd.com/1172/

All of these sorts of features are disabled by default, and only enabled if you enable the privacy.resistFingerprinting setting in about:config (or install an extension that does that). Among normal users, this letterboxing feature especially would upset almost everyone (though not as much as the Tor feature it’s inspired by), so I cannot imagine it ever being enabled by default.

Why not use the built-in mobile preview tool that allows you to set custom dimensions?

I also use that, but I find it faster and more straightforward to just drag the window edge, especially when devtools are not opened. One example is using browser and editor side-by-side on a single macOS split-full-screen.

I know you're not asking for advice, but in this case I would use LiveReload against multiple windows covering the breakpoints. Then you'll only need to wiggle edges to check breakpoint transitions.

You can open responsive mode via a keyboard shortcut without needing the dev tools open; on Windows it’s Ctrl+Shift+M, no idea about macOS but it’s doubtless easy to look up.

It's the same shortcut on Linux and *BSD and probably everywhere.

> Firefox's letterboxing support doesn't only work when resizing a browser window but also works when users are maximizing the browser window, or entering in fullscreen mode

Brilliant. All you have to do is change your window size for every site you visit.

The reason it's fingerprint resistant is because lots of other people will have the same reported screen size. Not because a different screen size is reported to different sites.
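Concretely, letterboxing reports the content area rounded down to coarse steps so that many differently sized windows report identical dimensions; the browser fills the leftover margin with gray bars. The thread mentions granularity in multiples of 100 or 200px, but treat the step values in this sketch as illustrative rather than Firefox's exact algorithm:

```javascript
// Round the viewport down to coarse buckets; the page only ever sees
// the bucketed size, never the real window dimensions.
function letterboxedSize(width, height, stepW = 200, stepH = 100) {
  return {
    width: Math.floor(width / stepW) * stepW,
    height: Math.floor(height / stepH) * stepH,
  };
}

console.log(letterboxedSize(1366, 768)); // → { width: 1200, height: 700 }
```

Every user whose window falls in the same bucket reports the same size, which is exactly the "look like everyone else" property described above.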
