It really bothers me that all this information is available through JS (no, please don't suggest I disable it). Why does a website need to know the list of installed fonts? Why can they enumerate the list of "videoinput" and "audioinput" devices before they've asked for video/microphone permission? Why is information about how I've set various permissions (prompt vs. allow vs. deny) available? Exposing the visibility of menu/tool/personal/status bar is just weird (though for me all of them say "Visible", which isn't the case, so I guess Firefox is lying).
Regarding the list of fonts, they don't get that info (they could with Flash, but not JS). What they do is have a list of font names in the JS source, and they set a text element to each one, comparing the resulting size of the element to a baseline (the size when the element is set to the font "sans-serif" or "serif"). If it's different, the font is installed.
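The measurement trick can be sketched in a few lines. This is a hedged illustration, not code from any particular tracker: the helper name and test string are made up, and the document is passed in as a parameter purely so the logic is testable outside a browser (a real script would just use the global `document`).

```javascript
// Sketch of JS font detection by measurement (no font-enumeration API needed).
// The function name and test string are illustrative, not from any library.
function isFontInstalled(fontName, doc) {
  const span = doc.createElement('span');
  span.textContent = 'mmmmmmmmmmlli'; // wide glyphs make width differences obvious
  span.style.fontSize = '72px';
  span.style.position = 'absolute';
  doc.body.appendChild(span);

  // Baseline width with a generic family only.
  span.style.fontFamily = 'sans-serif';
  const baseline = span.offsetWidth;

  // Candidate font with the generic family as fallback: if the candidate is
  // missing, the browser falls back and the width matches the baseline.
  span.style.fontFamily = `"${fontName}", sans-serif`;
  const width = span.offsetWidth;

  doc.body.removeChild(span);
  return width !== baseline;
}
```

A fingerprinting script simply loops this over a list of a few hundred well-known font names, which is why the result looks like an enumeration even though no enumeration API exists.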
Regarding the list of devices, it should be noted those IDs are site-specific, so they can't really be used for cross-site tracking, and clearing cookies also resets them (at least in Firefox). But there are discussions pointing to a consensus around disabling the enumeration before permission is granted. As in other cases, the browsers will probably have to lie to avoid breaking sites.
Firefox has an undocumented about:config preference "font.system.whitelist" to specify a list of which fonts are exposed to web content (such as the default set shipped with your OS). The preference is used by the Tor browser.
> comparing the resulting size of the element to a baseline
This is the real hole, imo. 99% of web pages should not need that kind of information. I’m close to thinking 100% shouldn’t, although I understand its need in particularly advanced applications. Maybe there should be a sandbox specifically for these very advanced JavaScript apps - nothing else should be allowed to get font metrics.
This is equivalent to saying that nothing dynamic can depend on the position or size of elements. Which means you can't use <picture> or srcset to load different images in different cases, because that can be exploited to leak what size other elements are. You can't use media queries to affect anything that can send requests. There can't be any way to lazy-load images that are way down the page.
From an anti-fingerprinting perspective you do much better to restrict what system fonts are available than to remove all of these valuable and common reactive features.
> This is equivalent to saying that nothing dynamic can depend on the position or size of elements.
Unless that dependency is specified in a style sheet language, yes. I think that’s exactly how it should be: the web of 'apps' is fine to break this if it wants, the web of 'content' shouldn’t be concerned with extremely detailed matters of presentation.
> From an anti-fingerprinting perspective you do much better to restrict what system fonts are available
Agreed; I’m not saying there isn’t lower hanging fruit, I’m just commenting on the one that interested me.
35% of all websites run WordPress. A lot of those sites run pre-made templates which often heavily use jQuery scripts to show pop-ups, sliders, tooltips, what have you... Accessing the width/height of an element is pretty standard in those scripts. That's not at all limited to advanced apps.
I'm just saying that simply blocking or limiting that part of the DOM API is probably not a practical solution.
I have JS turned off and still have plenty of other factors that make me unique so it's not just JS. My question is why is the rest of the information available / given, especially the user agent? The other headers mostly make sense. I can see the argument to turn 'do not track' off. But user agent? I've been making websites and apps for almost a quarter century and have never even read of a good use for that outside the tracking industry, let alone seen a good use of it.
I've written code several times that relies on user agent parsing to know whether it needs to work around a bug:
* Chrome on iOS started sending "webp" in its Accept header when it didn't support inline webp yet.
* Firefox would take CSPs applied to same-origin IFrames and additionally apply them to the parent document.
* Edge wouldn't accept data URLs for IFrames that were more than 4096 characters.
In all of these cases the bug was eventually fixed, but we needed to work around the bug in the meantime. UA parsing to say "if it's Edge < v76" or whatever was the best way to do this.
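A version gate like that is usually just a small pure function over the UA string. This is a sketch under my own assumptions (the helper name and regex are illustrative; Chromium-based Edge does report itself with an `Edg/<version>` token):

```javascript
// Hypothetical workaround gate: detect Chromium-based Edge older than v76
// from the User-Agent string. "Edg/" is the token Chromium Edge uses.
function needsEdgeWorkaround(userAgent) {
  const match = /\bEdg\/(\d+)/.exec(userAgent);
  if (!match) return false;      // not Chromium Edge at all
  return Number(match[1]) < 76;  // only old versions get the workaround
}
```

The nice property of gating on a version range is that once the bug is fixed upstream, new releases automatically stop taking the workaround path and the gate can eventually be deleted.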
If you want to render untrusted content, cross-origin IFrames are a great way to do it. A data: URL gives you a cross-origin context without a network round trip and with fewer issues than using the sandbox attribute.
I've used the user agent at work before. We were building something similar to Google's app store for a telecom, and we used it to know which file to serve the user when they clicked download. Since we supported all kinds of phones (including old ones that used Java files for games), we couldn't just serve an apk file.
User agents can be used by web servers to differentiate between bots/crawlers and actual human beings. But there could be a generic user agent instead of the ones we have now.
Note: This only matters for bots that serve to aid your website. Like search engine crawlers being given instructions what pages to ignore.
It's quite clever that a site that uses lots of processing, or has autoplay, can limit what it does: "your battery is low, do you want to continue?"
The problem is that shitheads exploit every possibility to reduce the utility of society in favour of their own profit.
This is why we can't have nice things; as the aphorism goes.
Well, that is the problem: for every shitty feature like that, someone can come up with something "quite clever" to do with it, but in reality these things are being used for tracking 99% of the time.
Maybe programmers and designers should start thinking the other way around: what could be pretty clever as a feature but CANNOT be misused for tracking?
> It's quite clever that a site that uses lots of processing, or has autoplay, can limit what it does: "your battery is low, do you want to continue?"
That sounds like a client concern, not a server concern.
I mean, as a consumer of web content I do not want my power saving profile to be accessible or actively influenced by a third party.
To stop computationally intensive or security related or otherwise sensitive operations when charge level is low? Save the state / synchronize more often when charge level is low to prevent data loss?
I read the article you linked to, but the method it describes would surely only work for a few minutes, or until the battery level goes down?
Having said that, I do get that battery level has some potential as part of a very short-lived fingerprint, and I really don't see the need to expose this information to remote sites.
I can't speak to the rest, but the list of installed fonts is presumably so that websites can decide whether to download a font or to pick from the options already available.
It's easy to specify a font in CSS and then watch to see if the browser downloads it or not. JS isn't revealing anything which you can't already trivially get.
If you can load a font on the client, the loading can fail. If the load can fail you can catch the error. If you can catch the error you can store a list of fonts without being able to actually do an enumerate request. And it wouldn't be much of a browser if it couldn't POST data to an html endpoint.
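That chain of reasoning fits in a few lines today, because the CSS Font Loading API exposes a synchronous probe. A hedged sketch - `document.fonts.check()` is a real API, but the helper names and the collector URL are invented, and the `fonts` parameter exists only to make the logic testable:

```javascript
// Sketch: no enumeration API needed; probe candidate fonts one by one.
// check() is synchronous and reports whether text could be rendered with
// the given font without triggering a download - true for installed fonts.
function probeFonts(fontNames, fonts = document.fonts) {
  return fontNames.filter((name) => fonts.check(`12px "${name}"`));
}

// And any browser can POST whatever it learned to some endpoint.
function report(found) {
  return fetch('https://collector.example/fonts', { // hypothetical endpoint
    method: 'POST',
    body: JSON.stringify(found),
  });
}
```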
As soon as you introduce a programming environment you introduce the ability to do all sorts of things that aren't builtin.
It's generally not the browser saying "here's the list of my settings you asked for", but rather a program being executed that pokes around at all the things it can interact with to squeeze out info.
But I agree that JavaScript should not have access to audio device model numbers, with or without permission, nor should it have any kind of access to the browser's chrome.
It achieved something: it serves as evidence that companies absolutely need to be beaten into submission with heavy-handed regulation of data collection, because they can't play nice on their own.
Honestly, I had always suspected a long term game like this was the actual intent. The EFF wants to say "SEE? They can't claim ignorance, they know people's intent and they're depraved in their indifference!". No one earnestly believed this would directly enhance privacy.
But frankly, all of this is just trying to negotiate with black hats. Any company actually acquiescing to rules for privacy will find themselves out-bid by companies who can claim they have better data by operating outside the law. These protocols are using the general public's privacy as sacrificial fuel to accomplish their impossible aims. It's a whole lot of wasted effort.
I wish the EFF would figure this out: it's impossible to legislate away a technological reality.
From what I understand, GDPR was supposed to guarantee opt-out by default, and it has been widely adopted. Instead of having stupid popups on every single website asking us to agree to their tracking policies, wouldn't it make more sense to enforce a law that says they should respect the setting expressed in the header and that users must not be bothered?
I think we'll probably see a wave of enforcement actions at some point that address some of this stuff.
There are still companies that hide the opt-outs in places I can't find after a lot of poking around (Oath, I'm looking at you), and a lot who will make the 'no tracking' button on their pop-up small and greyed out, and then ask you to confirm it using weird language like a link saying "Leave" that actually takes you back to what you were trying to read.
This makes me curious; has anyone tried making an extension to randomly choose, for each request, whether or not to send "Do Not Track"? There are probably a lot of other settings that similarly don't affect the page in any visible way and could be randomized as well. I doubt I'm the first person to think of this!
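As a sketch, a WebExtension could do this from a `webRequest.onBeforeSendHeaders` listener. The `randomizeDNT` helper below is my own invention operating on the header array that API passes in; the injectable random source exists only to make the coin flip testable:

```javascript
// Randomize the DNT header per request: drop any existing DNT header,
// then flip a three-way coin between "1", "0", and sending nothing.
function randomizeDNT(requestHeaders, rand = Math.random) {
  const headers = requestHeaders.filter((h) => h.name.toLowerCase() !== 'dnt');
  const roll = rand();
  if (roll < 1 / 3) headers.push({ name: 'DNT', value: '1' });
  else if (roll < 2 / 3) headers.push({ name: 'DNT', value: '0' });
  // else: send no DNT header at all
  return headers;
}

// In a real extension this would be registered roughly like:
// browser.webRequest.onBeforeSendHeaders.addListener(
//   (details) => ({ requestHeaders: randomizeDNT(details.requestHeaders) }),
//   { urls: ['<all_urls>'] },
//   ['blocking', 'requestHeaders']
// );
```

Note the replies below this comment, though: a setting that flickers between requests is itself a distinctive signal.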
Unless everyone was doing this, it'd be worse as you'd be in the few who have this extension (which sounds like it can trivially be tested by a webpage making a couple of requests).
Better remove the DNT header from this point of view.
I'd suspected that having JS completely disabled would make my fingerprint more distinct, since presumably few people have it disabled. However, on visiting the page with JS enabled I received a 100% unique fingerprint. Any one of the following identifies me uniquely: WebGL Renderer, List of fonts (JS), Media devices. Then there is a long list of properties which can be sniffed with JavaScript that identify me with >99.9% accuracy. When I disable JS, there are 108 browsers with the exact same fingerprint as mine.
That prompts me to reconsider my policy of blocking JS. Until now, I've been blocking only third-party JS. Now I am considering blocking all JS by default.
Same here: I was one of 83 with JavaScript disabled; when I turned it on, the information gained was far more than the information gained by knowing I have it off.
Fingerprinting needs the active effort of browser vendors to tackle. Otherwise no amount of blocking will help as it will either identify you more uniquely or cripple browsing the web to such an extent that almost no one will opt for it. Browser vendors should implement a resist fingerprint mode where browsers respond with a set of standard, agreed-upon values, or completely random ones for those that can't have a standardised value. This will probably break browsing to some extent but not as much as the 'nuclear', disable Javascript option that everyone suggests whenever this discussion comes up.
But of course most browser vendors don't have an incentive to resist fingerprinting. And if only one vendor does this then the anti-fingerprinting is likely to be less effective, as the pool of browsers is much smaller.
Wouldn't a user script be enough? Just hook all the APIs JavaScript has access to so they return random or default values before the first website script is executed.
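A minimal sketch of that approach. The property list is illustrative and real fingerprinting surfaces are far larger; the target object is a parameter only so the helper can be exercised without a browser (a real user script would call `spoof(navigator)` at document-start):

```javascript
// Pin fingerprinting-prone values to fixed defaults by shadowing them
// with non-configurable getters, so page scripts can't redefine them.
function spoof(target, overrides = { hardwareConcurrency: 4, deviceMemory: 8, language: 'en-US' }) {
  for (const [key, value] of Object.entries(overrides)) {
    Object.defineProperty(target, key, {
      get: () => value,
      configurable: false, // page scripts can't put the real value back
    });
  }
  return target;
}
```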
I don't think your user script will apply to iframes? So all the fingerprinter would have to do is open an iframe, get a fresh JS environment, and run their code there.
(Separately, your proposal would break tons of sites.)
Well, Firefox randomizes some of the fields, so of course there are unique hits. Consider the audio/video codec hashes: Firefox provides a unique number, so this website says I'm unique, but I'm unique every time someone asks! This site is kinda misleading, but still very useful.
Random (but plausible) values seem like a much better solution than just default values - you don't just hide as much identifying information as possible, you also poison the information that may still be there.
I use Firefox on Android with NoScript and an adblocker.
Yet I'm uniquely identifiable. One culprit is screen size putting me at <0.01%.
Does that make defeating fingerprinting on mobile hopeless for the casual user?
Edit: more info. All JS is blocked, and I have privacy.resistFingerprinting. The page doesn't detect my adblocker. Still, there are just too many things I can't change:
- hardware concurrency: 1.7%
- audio formats: 0.2%
- navigator properties: 0.2%
- audio data: 0.1%
I was surprised at this one:
- Media devices: Unique
What are media device identifiers for, exactly? Why does the browser supply it without JS?
If I disable Javascript (or don't enable it) I see everywhere 'NA' (except for HTTP headers) and no similarity rates are shown. Are you sure your Javascript is disabled?
What is the best way to disable JavaScript in Firefox? Is there an add-on that turns it off by default but can whitelist certain websites?
EDIT: I just discovered ublock origin can disable js by default. Now I wish I could change the user agent...
The problem with this is that the less unique you are, the more you start looking like a bot. And the more you look like a bot, the more often you have to do things to prove you aren't, like CAPTCHAs.
But I only care about filling in CAPTCHAs on, say, banking websites, which happens approximately once a month.
The rest, I just close the tab: the content is never worth it.
The most obnoxious ads don't care about captchas, they just blast you with ads from your previous search keywords/browsing history.
p.s. Would the number of CAPTCHAs go down if we took not the topmost non-unique fingerprint but, say, one "slightly below average"? It would still be severely non-unique for ad purposes.
> It's a cat and mouse game as fingerprint parameters keep increasing, but I think it's possible to win this one.
I don't think it's possible to win this one without dramatic changes to the web (e.g. abandon JS). Panopticlick was started 10 years ago, fingerprinting was pretty well known about back then, and though browser vendors have started adding defenses, it has evidently done nothing as the amount of data you can gather via scripts keeps only increasing and blocking some vectors would break some sites.
The situation was dire 10 years ago, it hasn't gotten better, it's not getting better. If anything it's now just worse because of so many idiots writing sites that don't really work at all with JS disabled.
The most practical approach I can think of that you could employ right now is to run browsers on headless servers (instead of the user's computer) and stream the rendered page to your client. It still has plenty of issues, and JavaScript makes it hard to do right with a good UX. Fuck JavaScript and everyone who uses it without degrading gracefully.
My fonts alone make me pretty unique, it seems (<0.01%). 180 or so fonts that the browser is happy to share with the world. Does seem a little unnecessary.
Firefox's preference is "font.system.whitelist". You can specify a list of which fonts are exposed to web content (such as the default set shipped with your OS). The preference is used by the Tor browser.
Does Safari actually do something similar? On my Mac, amiunique.org reports 344 fonts in Safari, 331 in Firefox (not using "font.system.whitelist"), and 309 in Chrome.
I am unique through the useragent. Apparently Brave and Chrome both put the device model of your phone into the useragent string. It also contains your OS version and Chromium version. I guess those three points alone are able to very significantly narrow you down.
I am really skeptical about the UA uniqueness fraction. I am on the latest version of Chrome on Windows and it claims that 0.17% of the people visiting the site have the same UA string. Mine is:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36
for what it's worth. Can it really be that only two in a thousand people visiting the site in the last seven days have the latest version of Chrome? Or does Chrome just update so often that very few people tend to have the same version at any one time. (If the latter, that suggests that maybe it's not a very strong tracking signal.) It seems more likely to me that the data is just stale but I guess I don't know much about the true distribution of Chrome useragents.
User-Agent is divulged publicly with all requests to all sites, so no protections exist around using it, because it is freely offered up by the user without requiring consent.
Client Hints have to be specifically requested by the remote website, which demonstrates intent to collect data, and thus falls under data collection laws.
> User-Agent is divulged publicly with all requests to all sites, so no protections exist around using it, because it is freely offered up by the user without requiring consent.
This seems like an argument you'd lose if your website failed to respond to requests that were missing the User-Agent.
Nope. Not at all. There is a browser[1] that tries to hide this stuff for you (it's the FF-derived TOR browser w/o the TOR bit). That should be a good start.
Apparently I'm the only person in the world using linux-5.4.14-zen with a Radeon VII. Which would be kinda cool if that information wasn't being broadcast to every site I visit.
The problem is that even if you can disable this one, it's enabled by default, and browser vendors keep adding this shit without any concern for the privacy impact.
What does it mean when I load the test and it says I'm unique, and then I go back five minutes later without changing anything and it says I'm still unique?
Are they tracking that I've run the test before, verifying that I'm me, and telling me I'm still unique, or is my fingerprint differing between two subsequent tests?
Also, I'm suspicious about some of these values - only 0.13% of people have a qwerty keyboard layout? Only 7% of tests have no gyroscope? That doesn't sound right.
Since users of evergreen browsers will have frequently changing version-number strings, the history of snapshots of these fingerprints is time-sensitive. This serves to artificially inflate the uniqueness: an identical device using the same evergreen browser one month or two weeks ago will not match you now, but would match if it revisited with its now-updated browser.
To be useful, fingerprinting techniques need to somehow be robust against always increasing version numbers of evergreen browsers in ways that this website is not.
That doesn't sound too far out of whack to me, considering the user-agent string changes every time your OS version or your browser version increments.
Why would the most recent version Google makes available to my Pixel 3a phone be different from the most recent version Google makes available to anyone else?
It wouldn't be, they're almost certainly doing something wrong on the back end. I sent identical requests on multiple IPs spaced out over some time, and each one of them returned as being "unique". Even though the e.g. UA values were in fact the same. I suspect some kind of aggressive dedupe of things that look too similar, or some such.
I was surprised that being in the US Pacific time zone was only shared by 2.25% of their data set. Assuming users are evenly distributed across time zones, it'd be 4.2%; but of course they aren't evenly distributed, and I'd expect the west coast of the US to have a disproportionately large number of internet users as well as people who would visit a website like this.
(Though, to be fair, they go by UTC offset, so it's only UTC-8 for a few months of the year due to DST. That probably affects the number; maybe the percentage is higher for UTC-7.)
Sites like this and the EFF's Panopticlick err on the side of saying you can be tracked when that might not be true.
For example: I visited this same site a while ago with the same device, and both times it has said I was unique. A new version of iOS came out, so my user agent changed. Unless sites are also storing unique data on your machine (through cookies or localstorage), browser fingerprinting is a crapshoot. This goes double for mobile devices.
Another unusual circumstance is when you're one of the first users to install the latest software update; a fingerprint-testing website will identify you as a unique user. But you won't be unique anymore within a few days.
It happens a lot in the Tor mailing list. Often, a new major release of Tor Browser comes out, a user is shocked by the upgrade, "OMG! I'm unique and trackable!". Don't panic, just keep using it.
Of course most sites that try to track you are also storing cookies or local storage. It only takes one to tie your two unique fingerprints together... of course your IP address might suffice. Or account/email address, if you're logged into a site that shares data with third parties and use the same address.
I'm sure someone's also devised a way to guess how to bind two fingerprints together when OS or browser updates but many other parameters (timezone, ip, language, screen size, fonts, etc) remain similar enough.
All the fingerprinting tools I've seen so far do not include JA3 signatures, which in my opinion make for an interesting bit of information - they introduce only a few bits of entropy, since they depend on the TLS implementation, but for the same reason they can't be easily spoofed.
> when all I'm trying to do is read text and view images?
The "images" on the web are increasingly 3D graphics dynamically generated and rendered by OpenGL on the fly; perhaps real-time ray tracing is coming soon. You can see a lot of applications of in-browser 3D graphics on data-visualization websites, for example. And naturally, the program needs this information to render graphics, but it can be repurposed as a tracking tool.
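Concretely, the GPU details come from a real (if obscure) WebGL extension, `WEBGL_debug_renderer_info`. A sketch of how a script reads them - the helper name is mine, and the canvas is a parameter so the null-handling can be tested without a browser:

```javascript
// Read the GPU vendor/renderer strings a page can see through WebGL.
// Returns null when WebGL (or the debug extension) is unavailable.
function getGPUInfo(canvas) {
  const gl = canvas.getContext('webgl');
  if (!gl) return null;
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  if (!ext) return null;
  return {
    vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
    renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL),
  };
}
```

This is why disabling WebGL (or having the browser refuse the debug extension, as Tor Browser does) removes one of the highest-entropy fields on these test sites.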
I disable WebGL manually, and use it only when it's necessary. There are a few plugins that allow you to manage it as well, for example, NoScript, but it's overkill to use it just for this single task.
I think the solution should be an opt-in option. WebGL shouldn't be activated unless the user gives the permission to a specific, first-party website.
Thanks for the idea, I just toggled the "webgl.disabled" switch in about:config; we'll see what difference it makes; I would only assume that my browsing will become faster now and will crash less frequently.
Is there a list of these useless things that any sane user should disable? I already have downloadable fonts disabled, with "gfx.downloadable_fonts.enabled" toggled away from the default; what other useless attack vectors (with their potential buffer overflows) could likewise be disabled?
I was really freaked to discover that all Debian family VMs on a given VirtualBox host have the same WebGL fingerprint. Same for all Windows VMs, all Red Hat family VMs, etc.
But then you can't see all the cool WebGL stuff that gets posted here.
> I think the solution should be an opt-in option. WebGL shouldn't be activated unless the user gives the permission.
More permission dialogs just train users to click "yes" on every permission dialog, so it really isn't a solution (remember Windows Vista?). I'd prefer it if there were some way to make browsers respond to WebGL queries in a similar predictable way to mitigate fingerprinting.
Using a "not enabled until clicked" approach is better - the popup won't be shown unless the user clicks the otherwise-missing video - but if the Flash object is a tracker, most users won't even notice it, and the tracker is blocked.
It's not perfect, but works reasonably well and has successfully protected a large number of users from Adobe Flash trackers.
I switched to 30d because with browser update cycles "all time" is kinda useless, imho.
> Content language 31.51% en-US,en;q=0.5
This is default Firefox; I didn't change anything. Surely only Chinese has more users than this language setting?
I'm not even an American or in the US... (all time it was just below 30%, not teal-colored, but already orange..)
Also why is the JS result for Content language 35.40%?
Whereas my list of plugins is 46.2% (higher value seems better) - but I'm absolutely sure there's at least one uncommon one among them...
Hardware concurrency 16 is really interesting, 3.32%, probably all fellow Ryzen users.
TLDR: Surely there are some of those properties where it absolutely makes sense to avoid the uniqueness, but others seem a bit like bullshit metrics.
But on the other hand, my user agent (OS+Browser) = 7.37% is a really good sign for me. I find this a very, very high percentage. (Ok, it's Win10+Firefox, but still...)
I fired up Chrome (I'm a Firefox user too), and for content-language it showed "en-US,en;q=0.9" (FF is q=0.5), so I guess each language is broken down into a few buckets.
It doesn't "send the data" so much as "a general-purpose UI platform cannot work unless a program knows the configuration of the UI". The idea that your complex computing environment could have an arbitrarily complex conversation with an app, and not expose its identity, while perhaps desirable, is impractical.
You can browse behind a generic user-agent firewall, but you'll get a severely degraded experience that treats an Apple Watch the same as a desktop workstation.
I didn't sign up for "a general purpose UI platform". I got on board when it was a hypertext publication system. They've been boiling this frog for 25 years.
Every year, I'm less and less convinced that JavaScript is desirable to have in a web browser.
It's not just about design. For instance, there are Canvas and WebGL for interactive content. Different GPUs draw things slightly differently. Someone creates a canvas in the background, draws stuff, and reads it back, and since your machine has a particular way of drawing things (because of GPU design, drivers, and different browser implementations of the painting stack), you can be tracked. The same goes for audio capabilities. If you don't want multimedia, then fine, it is slightly easier. Then the server can probably try to track you with your upload/download speed, ping, etc. They can tax your CPU to see how fast it is. Then they get your browser width/height. Combine them and you can be tracked pretty accurately. There are countless avenues for fingerprinting.
Well, sure, but at the hardware level, more standardization is possible too. If you have a popular model of computer and it's the same as everyone else's computer, it's hard to get much out of identifying it. This is kind of what Apple does by offering a limited number of models.
Another approach would be to use standardized VPNs that hide client-machine differences.
There are downsides, of course. I'm not sure people care about fingerprinting enough to do all that.
I've heard of a similar phenomenon where hackers probing a system can fingerprint different software stacks based on what they get as responses to different "undefined behavior" inputs. Anything that communicates with the outside world is intrinsically giving up some information about itself.
Is there something actionable I can take based on the results of this page? E.g. is there some Firefox add-on that obfuscates attributes used for fingerprinting?
Something here doesn't add up.
I'm unique even on my standard Samsung S10e with up-to-date Chrome or the stock browser.
My screen height has a similarity ratio of 0.11%. My user agent is <0.01%.
Perhaps their data is under-representing mobiles?
I can see how fingerprinting would work on a Windows PC with its vast number of possible combinations of cpu/gpu/installed system fonts/add-ons etc. But how would this work on mobile?
Yea, I've gone there using a stock iPhone and a stock iPad, both using Safari, and neither one with custom fonts, plugins, etc.
iPhone: only 48 browsers have exactly this fingerprint
iPad: only 14 browsers have this fingerprint.
My list of fonts is only 0.55% similar
And user agent is only 0.34% similar
Those are hard to believe on a device where these things really don’t get customized.
Most of these fields you don't want to disable or spoof because either it breaks your web experience (imagine images randomly not loading because you spoofed your headers and the server thinks your browser supports .whatever or because you emptied the list and the server doesn't know what to send) or because it makes you unique (having your build id be "" is certainly more unique than whatever others actually use).
The tests you can spoof, like canvas fingerprinting - where spoofing doesn't break the canvas, and the results are already so spread out that being unique isn't an identifier in itself - are already built into a lot of browsers. The site doesn't really acknowledge this, though; it just says "you're unique" without checking whether it's a different unique value each time, in which case it doesn't identify you at all.
The Canvas fingerprint is the biggest one, as it relies on different forms of hardware acceleration which is based on GPU+CPU+configs for your computer and browser.
Can't be that hard, considering Firefox allows for blocking Canvas fingerprinting in its settings, and the site indeed shows that my canvas data is shared with around 6% of the users, so definitely not unique.
I tried it myself in Tor Browser, and I think this conclusion "You can most certainly be tracked" is highly misleading. Without JavaScript, the only information collected by the website is the headers sent by the browser, listed below...
> User agent: Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0 (all time: 2.19% / 30 days: 10.80%)
Surely, there are ways to track users without JavaScript, but none of them is inspected by the website. The website only inspects the headers, and from the listed information above, the tracker doesn't learn anything more than "This user is running the latest release of Tor Browser with JavaScript disabled", and at least 100,000 people are doing it right now.
But because the sample size of this website is limited, it sees only that these attributes make up a very small percentage of the overall traffic, and it draws its conclusion while ignoring the fact that these headers are generic.
> "You can most certainly be tracked"
Which is misleading. A Tor Browser with JavaScript disabled is still one of the most difficult browsers to track.
The website actually tells you that,
> But only 1440 browsers out of the 1560340 observed browsers (<0.01 %) have exactly the same fingerprint as yours.
If you are the only person using the latest version of Tor Browser to access this website with JavaScript disabled, yes, you can be tracked as a "Tor user without JavaScript", and if you enter personal information, your identity can be cross-tracked between websites (if you are still the only Tor user with JavaScript disabled). Otherwise, not much.
Interestingly, whenever a new Tor Browser release comes out, there will always be a user who upgrades before everyone else, goes to a fingerprint-testing website, tests the browser, and says "OMG! I'm unique and trackable!". In this case, don't panic, just keep using it; you won't be unique anymore within a week.
The accelerometer, gyroscope and proximity sensor have a ~9% "false" ratio here, which I interpret as 9% laptops/desktops and 91% phones. That's a bit too many phones. But use of Adblock is over 34%. Also, I see 15.36% Chrome PDF Plugin and 23.81% Native Client, both of which only exist on PCs, and again that doesn't really mesh with only 9% non-phone. What's going on...? Are there laptops with all three things listed...?
Also, everything else puts Chrome market share above 60% these days, only 38% here.
> The accelerometer, gyroscope and proximity sensor have a ~9% false ratio here which I interpret as 9% laptops/desktops and 91% phones.
Some of the stats are clearly broken. For the 7 day duration, those stats are closer to 95% false. For the 90 days duration, I see values like
Platform 153.00% Linux x86_64
Screen height 253.53% 1080
Media devices 159.95% Timeout
> Also, I see 15.36% Chrome PDF Plugin and 23.81% Native Client
I don't see PDF reader stats at all.
> Also, everything else puts Chrome market share above 60% these days, only 38% here.
Easily explained by bias if it's even correct. The kinds of people who would try this service are also the kinds of people who would use a niche browser.
Apparently my content language alone (as read via JavaScript) is already unique (en-US,en,de,zh-CN,zh). I'm a native German speaker who speaks American English (I live in the US) and Chinese, and I actually have those languages enabled in my operating system.
Without JavaScript enabled the HTTP header for Content language is <0.01% -- (en-US,en;q=0.9,de;q=0.8,zh-CN;q=0.7,zh;q=0.6)
Is there any way to have a language installed in the operating system without leaking that information to the browser?
Honest question here about the risk of being “unique”...:
If my unique identifier changes with each browser version + OS version + timezone + build ID + font pack + screen size... am I really trackable through this vector? Those things change all the time, for me at least. It seems like a very short-term method of tracking at best. Why should we be afraid of these things? I had the same question about Panopticlick (or whatever the EFF called it).
It accurately identifies my model of iPhone and that I’m running an iOS beta, but it doesn’t appear to know anything that isn’t “I am an iOS developer”. That’s probably still something that can be tracked, but it boils down entirely to User-Agent which will vary weekly in lockstep with thousands of others.
I am hesitant to extend the result that I can be uniquely tracked to category “iOS developers” beyond that.
Having a billion people using the exact same model of a computer (iPhone), with almost identical hardware, software, peripherals, and system configuration, is surely an advantage in terms of anti-tracking.
I recommend trying the same page on your computer; the slightest variation in your hardware or software configuration will pinpoint your identity with 99% certainty: What is the position tolerance of your canvas? What is the list of fonts or browser plugins you have installed? What is the latency of your audio interface?
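Worth noting that the "list of fonts" probe doesn't read a font list at all; it measures rendered text against a fallback font. A minimal sketch of the comparison logic, with the DOM measurement abstracted into an injected `measureWidth` callback (a stand-in for setting a hidden element's font-family and reading its offsetWidth in a real page; all names here are illustrative):

```javascript
// Detect installed fonts by comparing rendered text width against a baseline.
// `measureWidth(fontFamily)` stands in for the DOM measurement step.
function detectFonts(candidates, measureWidth) {
  const baseline = measureWidth("sans-serif");
  // If a candidate renders at a different width than the generic fallback,
  // the browser found (and used) that font, so it must be installed.
  return candidates.filter((font) => measureWidth(`"${font}", sans-serif`) !== baseline);
}

// Fake measurements for illustration: only "Fira Code" changes the width.
const fakeWidths = { '"Fira Code", sans-serif': 412 };
const measure = (family) => fakeWidths[family] ?? 400;
console.log(detectFonts(["Fira Code", "Comic Sans MS"], measure)); // [ 'Fira Code' ]
```

This is why blocking font enumeration APIs alone doesn't help: any page that can size an element can run this probe.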
I use an Apple computer, too, and I only use the system fonts, don’t use adblockers in any browser, and don’t use plugins either.
(Yes, I know how tracking works. It’s ironic to watch people realize that all their protest actions are tracking beacons. I wish it was more widely understood that being obvious is a consequence of being obviously different.)
Agreed, that was the only thing which stood out to me as a surprise. Why would any website need to know my battery level, without an explicit permission? Seems like a gross oversight.
Remember that you are unique within the list of visitors to this website. That is a limited group. You are probably unique anyway, but the numbers here are not accurate.
Alright, so how do I fix the unique ones like Canvas?
Edit: To block canvas fingerprinting I set `privacy.resistFingerprinting` to true in Firefox's about:config. Now I am trying to figure out how to disable the font list and Media devices, etc.
The results on https://panopticlick.eff.org seem much more accurate. Probably since Panopticlick by EFF is more well-known and therefore has better data.
I hope it is a joke (pretty good one if it is), but on an off-chance it is not: it is entirely plausible that you are the first Brave browser user to visit the site (or at least Brave of that version).
Are you sure you're looking only at the "User Agent"? The text at the top is describing your entire "browser fingerprint". It includes everything the server could gather from you, including cookies, browser version, width/height of the window, etc.
In case it's not clear, uniqueness should be seen as bad in this case. It means you can be tracked.
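To be concrete about what an "entire browser fingerprint" usually is: a stable hash over every attribute the page can read, so changing any one attribute changes the whole fingerprint. A toy sketch using FNV-1a (the attribute names are illustrative, not what any particular site collects):

```javascript
// A fingerprint is just a deterministic hash over all readable attributes.
function fingerprint(attrs) {
  const s = Object.entries(attrs).map(([k, v]) => `${k}=${v}`).sort().join("|");
  // FNV-1a, 32-bit: tiny and deterministic — good enough for a demo.
  let h = 0x811c9dc5;
  for (const ch of s) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

const a = fingerprint({ userAgent: "Firefox/68.0", screen: "1920x1080", tz: "UTC-5" });
const b = fingerprint({ userAgent: "Firefox/68.0", screen: "1920x1080", tz: "UTC+1" });
console.log(a !== b); // true — one differing attribute yields a different fingerprint
```

This also shows why "my fingerprint changes all the time" (see the browser-update comments elsewhere in the thread) weakens naive hashing: real trackers compare attributes individually rather than relying on one exact hash.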
No idea what it means, but I am also curious. I literally just installed Brave on iOS before checking it out, so I tried it on iOS Chrome and the fingerprint was still unique.
I think it might just mean that we leak waaaay more data than most of us realize...
There's more than one iPhone, more than one iOS version and more than one version of Safari. Multiply the odds of someone having your exact setup AND browsing this site.
The Media Devices query might always return "unique" on some browsers (e.g. Firefox). Firefox, for example, returns a random string which persists for that origin across a single browsing session (but which is different after the browser is restarted, for example).
It's therefore like a cookie, but weaker in that it gets cleared on browser restart (and, in private browsing, it's always changing). So it's not very useful for tracking, since a site could just use a cookie instead.
If that's your only Unique attribute, then your browser might be less unique than the website claims.
No one has the same fonts I do, apparently. You could also probably infer my occupation from my preponderance of monospaced fonts.
I'm surprised to find that simply by having Ubuntu that I tweaked to my liking and a laptop, you can uniquely identify me by how much screen space I have left.
Can anyone give me an explanation of how the media devices thing works, though? Are video-in and audio-in given some random hash which websites can then use to identify me?
How do I configure Chrome spell checking dictionaries without letting it inform the websites about the languages I use? The set of languages makes me unique.
It's a bit strange that they recommend using the Tor Browser, but the Tor Browser is shown as being "unique" using their fingerprinting setup. Presumably this is just a lack of data rather than a real claim that browsers like the Tor Browser are actually not sufficiently resisting fingerprinting (after all, apparently 55% of people they've got data for use Linux and 41% use Firefox!).
Would I be right in saying that if you use iOS Safari (which a decent number of mobile users use), you are not very unique at all, and any tracking based on browser fingerprinting is pretty useless? (Or is it the combination of a non-unique browser fingerprint with a slightly more targeted origin IP address from the ISP that makes it almost worthwhile/possible to try to identify a person without cookies?)
I was quite surprised that on my iPhone 8 Plus with iOS 13 I was totally unique apparently. How exactly could that be possible if this is a device where I cannot install any plugins, change fonts, or really do anything to make it more or less different to another iPhone?
But: it seems to me that if you do something unusual (turn off JS, use a weird and wonderful browser, hide browser stuff from The Internet), then you make yourself even "more unique". Turning JS off still doesn't really help - and more to the point, it makes the modern web basically unusable.
I ended up with: meh, nothing anyone can do, why worry?
Do you have a good example of a reasonably high profile site that is unusable without js? One option (I know this sounds like a lot of work!) would be to disable js and whitelist only those sites you trust not to track you, that require js to be usable.
The site tells me that 41% of users share my web browser, that is Firefox. That seems insanely high given the general market share of FF. Any idea why it's so popular on this website? Of course I would expect privacy conscious people to be more likely to use FF but 40+% seems unbelievably huge especially when you factor mobile browsers and the like.
Probably because most of this website's users worry about their privacy and use Firefox instead of Chrome.
But Chromium shows a 0.66% share and Chrome shows a 38.28% usage share.
That made me want to see whether the calculations have some assumptions about variables being independent: it seems unlikely that the default browser on an OS shipping in 10^7+ volumes has various headers, fonts, etc. which are so identifying. I could believe some hardware variation with canvas/WebGL, but the iOS font list wasn't even customizable until 13.
Browsers could defend against this (especially in combination with centralized malicious-site protection): no legitimate page should be using many of these features, and certainly not this combination of features.
Also turning JavaScript off helps (although that becomes a signal too, just like DNT).
AZERTY gave me 0.03%, even though France and Belgium together have >1% of the world population, and definitely more than one percent of all internet users.
I am unique! It looks like the main culprit is that I'm running Firefox Developer Edition so my reported version is a tiny percent of all users and combined with Linux, Firefox, UTC-5 and en, that makes me one in 1561147.
This misses innerHeight, availHeight etc. which are also highly idiosyncratic due to different settings of the dock/task bar size, installed browser plug-ins, opened bookmark side windows and bars etc.
My fingerprint is unique apparently. Some water got in my main phone, so now I'm using an older Android 7.0 phone with Firefox. Just that fact got me to 0.02% of all users.
This is great. I knew fingerprinting in this fashion was possible, but seeing the actual figures attached to some of these headers and attributes is a real eye-opener.
Over the last seven days, 109.5 percent of all users have been using my operating system, and 129.86 percent of users have been using my web browser? I think not.
Interestingly this led me to notice Firefox limits hardware concurrency to a max of 16, changeable in about:config via the key dom.maxHardwareConcurrency
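Assuming the pref works as described, the reported value is presumably just a clamp on the real core count; an illustrative sketch (the pref name is from the comment above, the function is mine):

```javascript
// What a page reads from navigator.hardwareConcurrency under a cap like
// Firefox's dom.maxHardwareConcurrency pref (default cap assumed to be 16).
function reportedConcurrency(actualCores, maxPref = 16) {
  return Math.min(actualCores, maxPref);
}

console.log(reportedConcurrency(32)); // 16 — a 32-core machine blends in with 16-core ones
console.log(reportedConcurrency(8));  // 8  — values under the cap pass through unchanged
```

The cap trades a little scheduling information for a smaller fingerprinting surface: exotic high-core-count machines stop standing out.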
For me, it just detects that it's Linux (all time: 8.15%), while Platform is Linux x86_64 (all time: 10.93%). Kinda funny that there are more x86_64 Linux users than Linux users. Makes me wonder whether Ubuntu users are considered Linux users at all..
That's across "all time", not just the last 7 days.
But for the last 7 days, Firefox has over 300%, and Windows over 285%. Gecko is over 700%. Screen left of 0 is 250%. -- I'm guessing some kind of calculation error?
Panopticlick is interested in solving a problem (and considerable progress has been made on that); this site, as its name suggests, is here to sell you on a belief.
It'll cheerfully tell an unlimited number of identical browsers that they're "unique" because that's the message it is here to sell.
If it gave these supposedly "unique" browsers an actually unique identifier, they'd be able to compare it and see that they're not so "unique" as claimed, or worse, that the same browser gets different "unique" identifiers and so they aren't identifiers at all. So that's why it doesn't do that.