> Since we don't believe it's feasible to provide some mode of Chrome that can truly prevent passive fingerprinting, we will mark all related bugs and feature requests as WontFix.
I haven't read all the analyses in the links in that article, but this sounds defeatist and lazy, much unlike a stance that Chromium would take on security or performance on the web.
Contrast the above with what this article says about Firefox:
> Firefox's upcoming letterboxing feature is part of a larger project that started in 2016, called Tor Uplift.
> Part of Tor Uplift, Mozilla developers have been slowly porting privacy-hardening features developed originally for the Tor Browser and integrating them into Firefox.
If you value online privacy, your best choice is Firefox (though it requires some additional manual configuration). Safari comes second (its extensions directory could use more love). Firefox is also the choice you can influence most: use it, evangelize it, and donate to Mozilla if that's feasible for you.
Android Chrome's user agent:
Mozilla/5.0 (Linux; Android 6.0.1; SM-G928F Build/MMB29K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Mobile Safari/537.36
Versus Android Firefox's user agent:
Mozilla/5.0 (Android 9; Mobile; rv:66.0) Gecko/66.0 Firefox/66.0
Note how the Chrome browser announces your phone model and software build version to the world.
With regional models carrying carrier-customised software builds, the user agent alone can be enough to fingerprint a user.
It would help if the browser vendors, particularly Google, took a step back and spent some time thinking about why user agent strings were invented. They were always kludges for deciding how to present a web page, and they are far less necessary nowadays: you can query device dimensions with media queries, provide HiDPI resources that are only fetched when needed, and serve entirely different views for mobile and desktop, all without peeking at that ugly string. Beyond that, we have polyfills and frameworks that guarantee cross-browser compatibility and minimum supported versions, again without resorting to browser engine build numbers or worse, because feature detection is now largely integrated into the standards themselves.
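To sketch that point: instead of parsing the user agent string, a page can ask about the features it actually cares about. The media queries and layout names below are illustrative only, and `mq` defaults to `window.matchMedia` but is injectable so the selection logic can be exercised outside a browser:

```javascript
// Feature-based adaptation instead of UA sniffing (hypothetical layout names).
// `mq` defaults to window.matchMedia; pass a stub to test without a DOM.
function chooseLayout(mq = q => window.matchMedia(q)) {
  if (mq("(max-width: 600px)").matches) return "mobile";
  if (mq("(min-resolution: 2dppx)").matches) return "hidpi";
  return "desktop";
}
```

In a real page you would also listen for `change` events on the `MediaQueryList` so the layout updates live, still without ever reading `navigator.userAgent`.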
I was working for a firm last year that made a system with a browser front end that only supported Chrome and Safari, not Edge or Firefox -- this is happening everywhere, and it is why MS threw in the towel with Edge.
I would actually think the opposite. Wouldn't it be better because then only Google would have that information? Only Google would be able to fingerprint. This is of course under the assumption (which is currently accurate) that Google has the majority share of browsers. But maybe it wouldn't be, because it would teach others how to thwart their fingerprinting.
On the subject of HTTPS - do you think the interests of ordinary consumers are in any way served by continuing on HTTP? Countless websites, even those accepting login credentials used to think that it was acceptable to not take the trouble to set up HTTPS. The only thing the operators of these websites cared about was being marked "insecure" by the most popular browser. The shift to HTTPS was a definite win for privacy for every person who uses the web.
But no. Apparently, the "biggest reason" the people working at Google pushed this was that it was good for Google. They didn't care about all the benefits for end users, they only cared about themselves.
It's sad that slander like this can become mainstream view in a forum like HN.
Disclaimer - no connection to Google in any way. Don't even own Google stock.
Isn't that the basic assumption that our whole economy rests upon?
Companies tend to do things that advance their competitive advantage.
"slander" - really? For "I suspect", followed by a theory that an online ad company might bandwagon something (otherwise good) that helps them for less than altruistic reasons?
That's why they don't need any ulterior motivation for efforts like HTTPS Everywhere, Google Fiber, or Chrome. The very fact that those technologies make the web safer, faster, or more powerful already benefits Google indirectly.
Not denying that, but this could be reversed and would still make sense: Google has long held the stance that anything that's good for Google is good for the web.
So yes, that is the biggest reason - their business objective. It just happened that it also did something good for everyone.
A corporation can set any goal the founders please. Making money need not be the dominant one. Obviously making enough to stay solvent helps with longevity.
> "When we work on making our devices accessible by the blind," he said, "I don't consider the bloody ROI." He said that the same thing about environmental issues, worker safety, and other areas where Apple is a leader.
> He didn't stop there, however, as he looked directly at the NCPPR representative and said, "If you want me to do things only for ROI reasons, you should get out of this stock."
Hasn't Amazon been notable for actively avoiding profit through much of its existence?
I mean, when I do stuff at my job, I also do things that are good for the company that pays me. That's the job, right?
The problem is that Google has positioned itself in a way where things that are good for Google might be bad for humanity as a whole.
It could simply mean the interests aligned. I get my credentials encrypted and Google gets more [insert stuff here]. When the interests don't align the result is a bit different. You could be sending tracking/location data every few minutes, adblockers could be crippled, you could be forced to sign in to sync Chrome data, etc. Usually the very same stuff Google relies on for revenue.
So what I'm saying is that it's not that outlandish to assume Google had more reasons than your benefit.
Google is smart enough that they could have enforced the penalty only on sites that should use HTTPS, such as those with any kind of form submission or login. But they chose not to, without much of a stated reason, and that's why I can believe it helped some hidden motive of theirs.
I downvoted you because this is wrong. There are at least two good reasons for it:
1. It prevents MitM attacks. For example, some ISPs and free WiFi APs inject arbitrary code into unencrypted pages.
2. It improves privacy by encrypting the request headers and path (though note the hostname itself can still leak via TLS SNI unless encrypted ClientHello is in use).
(In addition to full time data brokerage firms, there are third party phone app api’s that slurp data and feed it to advertisers. Facebook has at least one. Presumably google does too...)
All these other ways do is give people the illusion that they're safe from being tracked, when in reality they're tracked just the same, only by fewer parties, which makes the data more valuable. The money is centralizing around the actors with the most inscrutable tracking methods, who are almost always the worst actors.
I hate it too, even though I'm not blameless. It's impossible to compete without a level playing field, and that playing field needs to be technically enforced, because otherwise we get region shopping and advertising / analytics models that push people to create intractable mechanisms so they can paper over how tracking fed into it.
For example, imagine a world where I'm bidding to show an ad to a visitor of nytimes.com. Now, I may not track the user, but if anyone is, they can incorporate what they know and sell that traffic back to me on a CPA model. All I see is the incoming traffic. I don't track anyone (wink, wink) but there is no difference.
In the long run this will either be solved one way or another, and all these online surveillance capitalism companies will crash and burn. Either we get a web with technical guarantees or we get a balkanized internet where every state makes their own weird laws about what is allowed or not.
Sure, a VPN won't fix the region bullshit, and can even be outright blocked. But if adoption rose to the point where websites didn't want to lose the traffic, those blocks would diminish.
However, I do agree a non-immediate, user-centric protocol is sorely needed, especially to stop those global snooping adversaries.
That's my biggest fear. Yesterday I woke up to find out that Three (a big telco provider in the UK) had blocked Mullvad's API path as 'adult content'. I had to physically go to a store and verify that I was an adult to be able to use my VPN - the very same VPN I use because I have so little trust in the UK Gov (I'm not from here and am not staying long term, but hate that the society reminds me of 1984 regularly).
A similar dynamic is present for ISPs: if most people expect "Internet access" to mean something that works with VPNs, then ISPs can't block them. But that's a harder state to reach, as there's much less competition between ISPs. A commerce site wouldn't want to forgo 10% of its business, but a government/quasi-government has no problem demonizing 10% of its subjects.
Which is why we ultimately do need non-real-time protocols and namespaces for the bulk of communication.
Nagging systems like the UK's would still exist, but they couldn't progress further to outright banning it.
I sincerely hope that happens without causing more harm to people. It also seems to be a long way away.
> The advertising code, which listens to window resize events, then reads the generic dimensions, sends the data to its server, and only after does Firefox remove the "gray spaces" using a smooth animation a few milliseconds later.
Would using a setTimeout() on the window resize event bypass this? Send the data 20-50ms after resize is completed giving enough time for the letterboxing stuff to go away revealing the actual dimensions, or something? They say it only blocks the dimensions during the resize event and FF removes the letterboxing "a few ms later"
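Roughly, such exfiltration code would debounce the resize events so it reads the dimensions only after the activity settles (whether Firefox has already restored the real dimensions by then is exactly the open question). This is a sketch with made-up names; the timer functions are injectable so the debouncing logic itself can be exercised outside a browser:

```javascript
// Debounce sketch: fn fires once, delayMs after the last event in a burst.
// schedule/cancel default to the browser timer APIs but are injectable so
// the logic can be tested without a DOM.
function debounce(fn, delayMs, schedule = setTimeout, cancel = clearTimeout) {
  let timer = null;
  return (...args) => {
    if (timer !== null) cancel(timer);   // restart the clock on each event
    timer = schedule(() => {
      timer = null;
      fn(...args);                       // runs after delayMs of quiet
    }, delayMs);
  };
}

// In a page (hypothetical): report dimensions ~50ms after resizing stops.
// window.addEventListener("resize",
//   debounce(() => send(window.innerWidth, window.innerHeight), 50));
```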
> Finally, an extra zoom was applied to the viewport in fullscreen and maximized modes to use as much of the screen as possible and minimize the size of the empty margins. In that case, the window had a "letterbox" (margins at top and bottom only) or "pillbox" (margins at left and right only) appearance. window.devicePixelRatio was always spoofed to 1.0 even when device pixels != CSS pixels.
So presumably the window size is not being reset to the real size; Firefox just does a smart zoom-in. In other words, the fake size persists throughout the entire session.
I wouldn't make too many assumptions. Browser vendors have overlooked seemingly "simple" things in the past.
No, it will be a setTimeout on the document load event that will poll the window size every 100ms from here till the page is evicted by a close or navigation event, increasing the detrimental effect of adtech.
If it weren't, you could still reverse engineer it by sticking elements outside the viewport and seeing whether they're hidden or not.
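That probing idea could look roughly like this. Here `isVisible(x)` stands in for whatever real DOM check reveals whether a probe element at a given x-offset is actually on screen (e.g. something built on IntersectionObserver); the binary search itself needs no browser APIs, so it can be tested with a fake oracle:

```javascript
// Binary-search the largest x at which a probe element is still visible.
// isVisible(x) is a stand-in for a real DOM visibility check; with one
// probe per step this recovers the true width in O(log maxWidth) checks.
function findRealWidth(isVisible, maxWidth = 8192) {
  let lo = 0, hi = maxWidth;
  while (lo < hi) {
    const mid = (lo + hi + 1) >> 1;   // round up so the loop terminates
    if (isVisible(mid - 1)) lo = mid; // pixel mid-1 is on screen
    else hi = mid - 1;                // probe fell off the real viewport
  }
  return lo;                          // ≈ the real viewport width
}
```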
Turns out anonymity is super freaking hard. :-/
I have been using it and it makes some websites go nuts and Google CAPTCHA takes forever.
It's like reader mode, except it works on more sites.
Edit: Having a bunch of HTML buttons/links, showing a different set to agents based on their resolution, and waiting to see which ones they follow would break this, unless everyone crawls a lot of stuff they don't need. Pretending to be one of a few common sizes is probably a better solution.
It's not something I've considered before and I am genuinely curious how this would work.
Perhaps it should be a site-specific permission like the microphone or camera. Your generic news site doesn't need that functionality (and shouldn't ask for the permission - you'd know something shady was going on) but your browser-based CAD tool would and you'd grant it there.
One example might be a set of images where the smaller images wrap text more aggressively to work better on a screen that's not as wide.
Firefox had better make sure its timing is not affected by such shenanigans, for instance.
Who out there browses to a site, resizes their window, then browses to another website, then resizes again, and so on? That makes no sense.
This all leads to a huge variation between users of even the same screen size (e.g. 1920x1080), since the portion of the screen available to the page is different.
The Tor browser fixes this by having the window always be the same size on all machines, regardless of screen resolution. This is a bit annoying because it means you have less stuff on the page visible at a time, but since it makes you look the same as every other user, it's worth it for privacy conscious users.
Probably not a super common scenario, but not ultra rare either.
I keep a browser window open at all times. It is never full screen, because if it were full screen I wouldn't be able to see multiple windows at the same time.
I keep my browsing window as close to 1024x768 as possible. In 2019, a lot of websites can't handle a browser window using a mere 75% of the laptop screen, so they either render incorrectly or, worse, switch to a mobile view. When that happens, I either blacklist the website forever in a contemptuous fervor, or just resize the window. Apparently, this resizing action is trackable.
When I say "as close to 1024x768" as possible, I mean exactly 1024x768 unless I have resized it and forgotten. I use a little AppleScript thing to resize it to 1024x768, precisely for browser fingerprinting reasons. When you resize the window by hand, you typically end up with a VERY unique window dimension.
For example, multiply the value (e.g. window width) by some huge number, perform a slow operation in a loop that many times, and finally clear a flag. Meanwhile another thread is filling an array one by one until the flag gets cleared. The last non-tainted index in the array indicates your approximate window width.
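The two-thread channel described above can be modelled as follows. This is a deterministic, single-threaded stand-in, stepping the two tasks cooperatively so the result is exact and testable; a real attack would interleave actual threads (e.g. a Worker plus a shared flag), which is precisely what browser timer fuzzing tries to frustrate:

```javascript
// Stand-in for the timing channel: one task spins through a work budget
// proportional to the secret, while a "sampler" appends to an array until
// the shared flag clears. The array length then reveals the secret.
function simulateTimingLeak(secretWidth, unitsPerTick = 10) {
  let flag = true;
  let remaining = secretWidth * unitsPerTick; // the "slow operation" budget
  const samples = [];
  while (flag) {
    remaining -= unitsPerTick;                // the spinning task's tick
    if (remaining <= 0) flag = false;         // budget spent: clear the flag
    samples.push(samples.length);             // the sampling task's tick
  }
  return samples.length;                      // ≈ secretWidth
}
```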
Annoying apps which control the layout in JS instead of letting the browser do it will need it.
This determination can't be done client-side? In other words, if I resize the window, it's going to send the new size to determine where to place the elements in the "new" area?
The W3C should probably create a new, rich spec hundreds of pages long so that frontend developers may instead declare images as a unitless set of point relationships to be rendered at any resolution without digital artifacts.
For example, instead of working on the pixel level, the developer would be free to simply declare, "an arc may exist in one of these four locations." Then, merely by declaring two further "flag" values, the developer can communicate to the renderer which three arcs not to draw, except for the edge case of no arc fitting the seven previously-declared constraints.
The best solution is to use the same screen size as everyone else so you don't stand out. And that's what this does.
If you insist on letting random people run code in your document reader.
But either way, if you have the JS, CSS and HTML, you should know where to put elements.
Are HN (news.ycombinator.com) people part of the problem?
There are a million ways to exfiltrate UI parameters through JS and CSS. It’s hard to both prevent that and still allow JS and responsive pages.
If you don't understand how the web works and actively dislike the community I don't understand why you keep commenting here.
That's happened twice, and it really hasn't been a big deal. I've never actually done a captcha for HN, it's just not worth it to me.
I instantly leave a website that has this aggressive reCaptcha that uses free labour to train algorithms.
It is not. Hundreds of thousands of people got (or will get) the same picture and marked the squares correctly.
I love this idea. Recaptcha is god awful.
By this point if I see reCaptcha I just close the window unless it's something _really_ important.
Also reCaptcha isn't any more secure than any other recent captcha solution when it comes to bot protection FYI.
Every time you see a Captcha, you may have to do four or five to be considered "human" because the system struggles to determine who you are.
Is the tile fade-in also the system struggling to figure out who you are? No, it's vindictive. Punishment for not browsing the web the way google wants you to.
Here is a video of the fade-in in action: https://youtu.be/zGW7TRtcDeQ?t=89
(Note that you do not need to be on a shared IP to experience this. Merely using Firefox with resist fingerprinting and an ad blocker is enough to trigger this behavior on an IP Google normally 'trusts' when using Chrome.)
I have a static IP address, and Google will happily give me a not-dick captcha if I use another browser without anti-tracking features enabled (even with a clean profile).
Okay, I agree that's punitive. I've never seen a fade that slow, so it must be at the lowest levels of trust.
In about:config if you search for 'resistFingerprinting' there seem to be sub-settings which you can tweak to disable the timer modifications, but even after tweaking them I wasn't able to get performance to be as smooth as when resistFP was completely disabled.
Now it's used for tracking and training proprietary road image recognition for some mega corporation that had a slogan of "do no evil", which is quite hysterical really.
I need zoom to not ruin my eyes - is it just too hard to mask the true zoom?
With the letterboxing, it seems like it would mostly not do anything when using a tiling WM with fixed splits. Does that sound right?
In the bug report it says:
> We haven't yet landed this feature in Tor Browser for a few reasons:
> - ...
> - Tiling window managers on Linux are hard to detect. Any implementation will need to behave appropriately for those.
So it appears they are still working on that.
This isn't just for some power users; it will increase Firefox's share among regular people too, since their pages will load even faster, making Firefox more popular.
Or are they waiting until the user share falls below 5%? Maybe they should listen to Andy Grove and prepare now.
It might mean millions of FF users would suddenly struggle with captchas, but it might also mean that site creators just stop using reCaptcha and similar.
The engineering effort required to implement this feature is better spent in doing useful things.
Despite all Microsoft's posing and snuggling up to the Open Source world, they're just as bad as Google and Facebook if given the chance. (And they have been as bad in the past.)
It's truly unfortunate that browsers just punted on security, dumping endless amounts of sensitive information into a purported sandbox. Why bother developing something with a secure mindset to begin with, when you can just band-aid on patches later?! It's the sendmail/ActiveX philosophy all over again, only now with network effects.
I have and do pay money for content. For those who don't offer that option I've got uBlock Origin as ads (alongside privacy) also present a security issue.
I was in favour of DNT and made a little browser extension that would allow you to DNT some sites and not others. My hope was that eventually we'd be in a state where you could signal to the site right away whether you'd accept tracking or not and the site could paywall you if you didn't. That way it's a "pay with your data or money" loaded into the User Agent and I think I like that. It respects user choice.
When I need actual privacy, I just use Tor, which supports most sites and is way more protective of my privacy than Firefox. I may switch to Brave in the future for this use case, as they're adding Tor support, but right now Chrome + Tor every once in a while works best for me.
In the future? Tor tabs are already a feature of Brave, as of today.
This is just factually incorrect.
https://www.torproject.org/projects/torbrowser/design/ (see the "HTML5 Canvas Image Extraction" section)
I would love to see a blog post about some of these features and why things are difficult.
which seems pretty safe
I'm glad to hear that Firefox also places a lot of value on privacy.
I do responsive web design, and spend a considerable amount of time resizing my browser window as a "cheap" way of previewing how it would look on narrower screens. Having the resize snap to multiples of 100 or 200px would make this experience horrible. Disabling it on localhost (where you're supposedly in control of what goes in and out the browser) could be a solution.
Brilliant. All you have to do is change your window size for every site you visit.