Am I Unique? (amiunique.org)
522 points by csomar 28 days ago | 241 comments



It really bothers me that all this information is available through JS (no, please don't suggest I disable it). Why does a website need to know the list of installed fonts? Why can they enumerate the list of "videoinput" and "audioinput" devices before they've asked for video/microphone permission? Why is information about how I've set various permissions (prompt vs. allow vs. deny) available? Exposing the visibility of menu/tool/personal/status bar is just weird (though for me all of them say "Visible", which isn't the case, so I guess Firefox is lying).


Regarding the list of fonts, they don't get that info (they could with Flash, but not JS). What they do is have a list of font names in the JS source, and they set a text element to each one, comparing the resulting size of the element to a baseline (the size when the element is set to the font "sans-serif" or "serif"). If it's different, the font is installed.
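
A minimal sketch of that probe (the font names here are just illustrative):

    function hasFont(name) {
      const probe = document.createElement('span');
      probe.textContent = 'mmmmmmmmmmlli';
      probe.style.cssText = 'position:absolute;visibility:hidden;font-size:72px';
      probe.style.fontFamily = 'monospace';  // baseline: generic fallback only
      document.body.appendChild(probe);
      const w = probe.offsetWidth, h = probe.offsetHeight;
      probe.style.fontFamily = '"' + name + '", monospace';  // candidate, else same fallback
      const installed = probe.offsetWidth !== w || probe.offsetHeight !== h;
      probe.remove();
      return installed;
    }
    const found = ['Arial', 'Courier New', 'OpenDyslexic'].filter(hasFont);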

Regarding the list of devices, it should be noted those IDs are site-specific, so they can't really be used for cross-site tracking, and clearing cookies also resets them (at least in Firefox). But there are discussions pointing to a consensus around disabling the enumeration before permission is granted. As in other cases, the browsers will probably have to lie to avoid breaking sites.
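
For reference, the enumeration is a one-liner and resolves before any permission prompt:

    // deviceId is the per-origin identifier discussed above;
    // label stays "" until camera/microphone permission is granted.
    navigator.mediaDevices.enumerateDevices().then(devices => {
      devices.forEach(d => console.log(d.kind, d.deviceId, d.label));
    });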


Firefox has an undocumented about:config preference "font.system.whitelist" to specify a list of which fonts are exposed to web content (such as the default set shipped with your OS). The preference is used by the Tor browser.

https://bugzilla.mozilla.org/show_bug.cgi?id=1121643
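
For example, in user.js (this particular whitelist is only illustrative):

    user_pref("font.system.whitelist", "Arial, Courier New, Georgia, Times New Roman, Verdana");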


Opera Presto had this enabled by default.


> comparing the resulting size of the element to a baseline

This is the real hole, imo. 99% of web pages should not need that kind of information. I’m close to thinking 100% shouldn’t, although I understand its need in particularly advanced applications. Maybe there should be a sandbox specifically for these very advanced JavaScript apps - nothing else should be allowed to get font metrics.


This is equivalent to saying that nothing dynamic can depend on the position or size of elements. Which means you can't use <picture> or srcset to load different images in different cases, because that can be exploited to leak what size other elements are. You can't use media queries to affect anything that can send requests. There can't be any way to lazy-load images that are way down the page.

From an anti-fingerprinting perspective you do much better to restrict what system fonts are available than to remove all of these valuable and common reactive features.


> This is equivalent to saying that nothing dynamic can depend on the position or size of elements.

Unless that dependency is specified in a style sheet language, yes. I think that’s exactly how it should be: the web of 'apps' is fine to break this if it wants, the web of 'content' shouldn’t be concerned with extremely detailed matters of presentation.

> From an anti-fingerprinting perspective you do much better to restrict what system fonts are available

Agreed; I’m not saying there isn’t lower hanging fruit, I’m just commenting on the one that interested me.


The problem is, even if you restrict this to being specified in CSS, it can still be used for fingerprinting. For example:
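
A sketch of a CSS-only font probe (the collection endpoint is hypothetical):

    @font-face {
      font-family: probe-opendyslexic;
      /* the url() is requested only if the local() lookup fails, so any
         element styled with this family leaks one bit to the endpoint */
      src: local("OpenDyslexic"), url("https://tracker.example/no-opendyslexic");
    }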


Is it? It seems to me getting the size of an element is a pretty basic operation that a lot of scripts use.


Maybe scripts for advanced apps, but most apps and all content shouldn’t have the slightest interest in positioning elements down to the pixel.


35% of all websites run WordPress. A lot of those sites run pre-made templates which often make heavy use of jQuery scripts to show pop-ups, sliders, tooltips, what have you... Accessing the width/height of an element is pretty standard in those scripts. That's not at all limited to advanced apps.

I'm just saying that simply blocking or limiting that part of the DOM API is probably not a practical solution.


I have JS turned off and still have plenty of other factors that make me unique so it's not just JS. My question is why is the rest of the information available / given, especially the user agent? The other headers mostly make sense. I can see the argument to turn 'do not track' off. But user agent? I've been making websites and apps for almost a quarter century and have never even read of a good use for that outside the tracking industry, let alone seen a good use of it.


I've written code several times that relies on user agent parsing to know whether it needs to work around a bug:

* Chrome on iOS started sending "webp" in its Accept header when it didn't support inline webp yet.

* Firefox would take CSPs applied to same-origin IFrames and additionally apply them to the parent document.

* Edge wouldn't accept data URLs for IFrames that were more than 4096 characters.

In all of these cases the bug was eventually fixed, but we needed to work around the bug in the meantime. UA parsing to say "if it's Edge < v76" or whatever was the best way to do this.
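
A sketch of what such a gate looks like (the version cut-off is illustrative, per the example above; the regex matches both the legacy "Edge/18" and Chromium "Edg/79" tokens):

    // Apply the workaround only on Edge versions before the release
    // that (hypothetically) fixed the bug.
    function needsEdgeWorkaround(fixedInMajor) {
      const m = navigator.userAgent.match(/Edge?\/(\d+)/);
      return m !== null && parseInt(m[1], 10) < fixedInMajor;
    }
    if (needsEdgeWorkaround(76)) {
      // e.g. fall back to a blob: URL instead of a long data: URL
    }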


Why the hell do you need data URLs for iframes?


If you want to render untrusted content, cross-origin IFrames are a great way to do it. A data: url gives you a cross-origin context without a network round trip and with fewer issues than using the sandbox attribute.
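
A minimal sketch of the pattern:

    // A data: URL iframe gets an opaque origin, so the untrusted markup
    // cannot script against the embedding page.
    const frame = document.createElement('iframe');
    frame.src = 'data:text/html;charset=utf-8,' +
                encodeURIComponent('<p>untrusted content</p>');
    document.body.appendChild(frame);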


I've used the user agent at work before. We were building something similar to Google's app store for a telecom, and we used it to know which file to serve to the user when they clicked download. Since we supported all kinds of phones (including old ones that used Java files for games), we couldn't just serve an apk file.


User agents can be used by web servers to differentiate between bots/crawlers and actual human beings. But there could be a generic user agent instead of the ones we have now.

Note: This only matters for bots that serve to aid your website. Like search engine crawlers being given instructions what pages to ignore.


Most crawlers/bots, particularly the ones you want to be able to identify, spoof their user agent, making this mostly moot.


didn’t Chrome recently announce that they’re going to freeze the user-agent string?


Yes, but also expose the information through:

+ A new set of optional HTTP headers

+ A new JS API


How much of the internet still functions with JavaScript turned off?


The best parts imho.


Ouch... Firefox Focus includes:

chargingTime: Infinity
level: 0.77

Yes that is indeed my phone's current battery level. And no matter if incognito Brave, Firefox Focus or Chrome browsers, I am unique to the site.

Even Firefox Focus, which is meant to be good for privacy by throwing away the entire session each time, is leaky, e.g. via the audio context.
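
For reference, reading those values takes a few lines via the Battery Status API (Chromium still exposes it to pages; Firefox has since restricted it):

    if (navigator.getBattery) {
      navigator.getBattery().then(battery => {
        console.log(battery.level);        // e.g. 0.77
        console.log(battery.chargingTime); // Infinity while discharging
      });
    }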


Surely the battery level isn't particularly useful for fingerprinting, since it's going to change fairly quickly?


But why the fuck should a website know how much charge my phone has?


It's quite clever that a site that uses lots of processing, or has autoplay, can limit what it does: "your battery is low, do you want to continue?"

The problem is that shitheads exploit every possibility to reduce the utility of society in favour of their own profit.

This is why we can't have nice things; as the aphorism goes.


Well, that is the problem with it: for every shitty feature like that, someone can come up with something "quite clever" to do with it, but in reality all these things are being used 99% of the time for tracking.

Maybe programmers and designers should start thinking the other way around: what could be a pretty clever feature that CANNOT be misused for tracking?


> It's quite clever, that a site that uses lots of processing, or has autoplay, can limit what it does: " your battery is low, do you want to continue ".

That sounds like a client concern, not a server concern.

I mean, as a consumer of web content I do not want my power saving profile to be accessible or actively influenced by a third party.


We are living in the best of all possible worlds. Without shitheads we would be vulnerable and wouldn't even know it.


Many places will increase prices with low battery as a desperation tactic. It's shady af and should be illegal imho.


To stop computationally intensive or security related or otherwise sensitive operations when charge level is low? Save the state / synchronize more often when charge level is low to prevent data loss?


Running untrusted code doing these things is not something I'm ok with.



I read the article you linked to, but the method it describes would surely only work for a few minutes, or until the battery level goes down?

Having said that, I do get that battery level has some potential as part of a very short-lived fingerprint, and I really don't see the need to expose this information to remote sites.


I can't speak to the rest, but the list of installed fonts is presumably so that websites can decide whether to download a font or to pick from the options already available.


In CSS you list the fonts you want, and the browser will decide if it downloads them or not. There is simply no reason JS shouldn't do the same.


It's easy to specify a font in CSS and then watch to see if the browser downloads it or not. JS isn't revealing anything which you can't already trivially get.


For a start, you have to specify it. If I have some weird font installed, you don't get to know it for free.

Anyway, if the site had to ask for the fonts, it would be viable for the browsers to impose some sane limitations on their API.


If you can load a font on the client, the loading can fail. If the load can fail you can catch the error. If you can catch the error you can store a list of fonts without being able to actually do an enumerate request. And it wouldn't be much of a browser if it couldn't POST data to an html endpoint.
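
Concretely, a sketch of that chain using the CSS Font Loading API, assuming a local() source rejects when the named font isn't installed (the font names come from the tracker's own list):

    async function probeFont(name) {
      const face = new FontFace('__probe__', 'local("' + name + '")');
      try {
        await face.load();  // resolves only if the local font exists
        return true;
      } catch (e) {
        return false;       // load failed: font not installed
      }
    }
    // ...collect results for a list of names, then POST them anywhere.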

As soon as you introduce a programming environment you introduce the ability to do all sorts of things that aren't builtin.

It's generally not the browser saying "here's the list of my settings you asked for", but rather a program being executed that pokes around at all the things it can interact with to squeeze out info.

But I agree that JavaScript should not have access to audio device model numbers, with or without permission, nor should it have any kind of access to the browser's chrome.


The irony in using the "Do Not Track" attribute to track users is delicious.


That flag was hilarious and pointless from day 1.

So we're going to ask the profit-motivated bad actors across the web to pretty-please exempt us from their tracking because we asked nicely?

And we need to hide this away in settings and have it default-off because otherwise how will they know we really meant it?

It's like it was designed by someone with an early 90s understanding of the web, and no comprehension whatsoever of just how awful marketing can be.

Tracking should never be opt-out in the first place, and it needs to be fought with technological and legal measures, not silly http request flags.


It achieved something: it serves as evidence that companies absolutely need to be beaten into submission with heavy-handed regulation of data collection, because they can't play nice on their own.


Honestly, I had always suspected a long term game like this was the actual intent. The EFF wants to say "SEE? They can't claim ignorance, they know people's intent and they're depraved in their indifference!". No one earnestly believed this would directly enhance privacy.

But frankly, all of this is just trying to negotiate with black hats. Any company actually acquiescing to rules for privacy will find themselves out-bid by companies who can claim they have better data by operating outside the law. These protocols are using the general public's privacy as sacrificial fuel to accomplish their impossible aims. It's a whole lot of wasted effort.

I wish the EFF would figure this out. It's impossible to legislate away a technological reality.


From what I understand, GDPR was supposed to guarantee opt-out by default, and it is widely adopted. Instead of having stupid popups on every single website asking us to agree to their tracking policies, wouldn't it make more sense to enforce a law that says they must respect the setting expressed in the header and that users must not be bothered?


I think we'll probably see a wave of enforcement actions at some point that address some of this stuff.

There are still companies that hide the opt-outs in places I can't find after a lot of poking around (Oath, I'm looking at you), and a lot who will make the 'no tracking' button on their pop-up small and greyed out, and then ask you to confirm it using weird language like a link saying "Leave" that actually takes you back to what you were trying to read.


This makes me curious; has anyone tried making an extension to randomly choose for each request whether or not to send "Do Not Track"? There are probably a lot of other settings that similarly don't affect the page in any visible way that could also be randomized as well. I doubt I'm the first person to think of this!
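
A rough sketch of what such an extension could look like (Firefox WebExtensions, Manifest V2 blocking webRequest; needs the webRequest, webRequestBlocking and host permissions):

    // Strip any existing DNT header, then re-add it on a coin flip per request.
    browser.webRequest.onBeforeSendHeaders.addListener(
      details => {
        const headers = details.requestHeaders.filter(
          h => h.name.toLowerCase() !== 'dnt');
        if (Math.random() < 0.5) headers.push({ name: 'DNT', value: '1' });
        return { requestHeaders: headers };
      },
      { urls: ['<all_urls>'] },
      ['blocking', 'requestHeaders']
    );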


Unless everyone was doing this, it'd be worse as you'd be in the few who have this extension (which sounds like it can trivially be tested by a webpage making a couple of requests).

Better remove the DNT header from this point of view.


I appreciate that you pointed this out, I would have missed it.

Brilliant and terrible.


I think that I am still going to keep it enabled, as a matter of principle.


Mind expanding on this? Where can I learn more about this attribute?


It was an attempt at standardizing a browser setting to opt out of tracking, but it was ultimately unsuccessful.

https://en.wikipedia.org/wiki/Do_Not_Track


Apple even removed the Do Not Track option from Safari last year:

https://developer.apple.com/documentation/safari_release_not...


I'd suspected that having JS completely disabled would make my fingerprint more distinct, since presumably few people have it disabled. However, on visiting the page with JS enabled I received a 100% unique fingerprint. Any one of the following identifies me uniquely: WebGL Renderer, List of fonts (JS), Media devices. Then there is a long list of properties which can be sniffed with JavaScript that identify me with >99.9% accuracy. When I disable JS, there are 108 browsers with the exact same fingerprint as mine.

That prompts me to reconsider my policy of blocking JS. Until now, I've been blocking only third-party JS. Now I am considering blocking all JS by default.


The most identifiable data points for me are from JS, except User agent.


Same; I was one of 83 with JavaScript disabled. When I turned it on, the information gained was way more than the information gained by knowing I have it off.


Ditto with my browser: unique with JavaScript, but a few dozen identical browsers with JavaScript disabled.

Keeping JavaScript off for now as an experiment, see how many graceful websites are still out there…


Note that some of these (ex: fonts) can be detected without JS, it just likely wasn't worth it for this site to code them that way.


Um..how’s your web browsing experience with no JS...?


Pretty decent. I enable it for maybe one site in 10, I'd say, and leave a whole lot more sites that require it needlessly without shedding any tears.


Fast?


Of course it's fast if you remove any sort of functionality from the net.


Mostly unwanted functionality.


Fingerprinting needs the active effort of browser vendors to tackle. Otherwise no amount of blocking will help as it will either identify you more uniquely or cripple browsing the web to such an extent that almost no one will opt for it. Browser vendors should implement a resist fingerprint mode where browsers respond with a set of standard, agreed-upon values, or completely random ones for those that can't have a standardised value. This will probably break browsing to some extent but not as much as the 'nuclear', disable Javascript option that everyone suggests whenever this discussion comes up.

But of course most browser vendors don't have an incentive to resist fingerprinting. And if only one vendor does this then the anti-fingerprinting is likely to be less effective, as the pool of browsers is much smaller.


Chrome is making the user agent non-trackable (at some point):

https://www.zdnet.com/article/google-to-phase-out-user-agent...

I think there are actually plenty of examples where the browser vendors are doing this.

Google gets so much free tracking from analytics, ads, and OpenID, I don't think they really need to resist privacy oriented browser changes.

They might resist some privacy changes, idk, and are probably trailing safari and firefox and brave etc


Brave is easier to fingerprint than Internet Explorer


Wouldn't a user script be enough? Just hack all the APIs JavaScript has access to so they return random or default values before the first website script is executed.
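
Something like this, as a minimal sketch (values illustrative; it has to run at document_start to beat page scripts):

    // Shadow a few fingerprintable getters with common values.
    Object.defineProperty(navigator, 'hardwareConcurrency', { get: () => 4 });
    Object.defineProperty(screen, 'width',  { get: () => 1920 });
    Object.defineProperty(screen, 'height', { get: () => 1080 });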


I don't think your user script will apply to IFrames? So all the fingerprinter would have to do is open an iFrame, get a fresh JS environment, and run their code there.

(Separately, your proposal would break tons of sites.)


Well, Firefox randomizes some of the fields, so of course there are unique hits. Consider the audio/video codec hashes. Firefox provides a unique number, so this website says I'm unique, but I'm unique every time someone asks! This site is kinda misleading, but still very useful.


Random (but plausible) values seem like a much better solution than just default values: you don't just hide as much identifying information as possible, you also poison the information that may still be there.


I use Firefox on Android with NoScript and an adblocker.

Yet I'm uniquely identifiable. One culprit is screen size putting me at <0.01%.

Does that make defeating fingerprinting on mobile hopeless for the casual user?

Edit: more info. All JS is blocked, and I have privacy.resistFingerprinting enabled. The page doesn't detect my adblocker. Still, there are just too many things I can't change:

- hardware concurrency: 1.7%

- audio formats: 0.2%

- navigator properties: 0.2%

- audio data: 0.1%

I was surprised at this one:

- Media devices: Unique

What are media device identifiers for, exactly? Why does the browser supply it without JS?


If I disable Javascript (or don't enable it) I see everywhere 'NA' (except for HTTP headers) and no similarity rates are shown. Are you sure your Javascript is disabled?


What is the best way to disable JavaScript on Firefox? Is there an add-on that can turn it off by default, while whitelisting certain websites?

EDIT: I just discovered ublock origin can disable js by default. Now I wish I could change the user agent...


Great idea! Targeted advertising is really upsetting. If we take it one step further:

1. Find out the top non-unique fingerprint [of the year.]

2. Create a Firefox add-on (or modify Firefox source if an add-on is not powerful enough), which uses the most non-unique fingerprint [this year.]

3. Targeted advertising ends for those who use the plug-in/add-on.

It's a cat and mouse game as fingerprint parameters keep increasing, but I think it's possible to win this one.


The problem with this is that the less unique you are, the more you start looking like a bot. And the more you look like a bot, the more often you have to do things to prove you aren't, like captchas.


This is Google's strategy to defeat privacy in a nutshell


A better solution would be to scramble parts of this fingerprint for each domain.


True.

But I only care about filling in captchas on, say, banking websites, which happens approximately once a month.

The rest, I just close the tab: the content is never worth it.

The most obnoxious ads don't care about captchas, they just blast you with ads from your previous search keywords/browsing history.

p.s. Would the number of captchas go down if we took not the most common fingerprint, but one "slightly below average"? It would still be severely non-unique for ad purposes.


If everyone looks like a bot, then nobody's a bot, and they can go fuck off.


a small price to pay for anonymity


> It's a cat and mouse game as fingerprint parameters keep increasing, but I think it's possible to win this one.

I don't think it's possible to win this one without dramatic changes to the web (e.g. abandon JS). Panopticlick was started 10 years ago, fingerprinting was pretty well known about back then, and though browser vendors have started adding defenses, it has evidently done nothing as the amount of data you can gather via scripts keeps only increasing and blocking some vectors would break some sites.

The situation was dire 10 years ago, it hasn't gotten better, it's not getting better. If anything it's now just worse because of so many idiots writing sites that don't really work at all with JS disabled.

The most practical approach I can think of that you could employ right now is to move browsers to run on headless servers (instead of the user's computer) and let them stream the rendered page to your client. It's still got plenty of issues, and javascript is making it hard to do it right and have a good UX. Fuck javascript and everyone who uses it without degrading gracefully.


My fonts alone make me pretty unique, it seems (<0.01%). 180 or so fonts that the browser is happy to share with the world. Does seem a little unnecessary.


My list of fonts includes OpenDyslexic and OpenDyslexicMono and puts me at < 0.01% as well.


I can't remember which, but either Firefox or Chrome only sends a standard list of fonts.


Safari does that.

Firefox has an about:config preference that would let you set up a font whitelist yourself, but it doesn’t have a standard set.


Firefox's preference is "font.system.whitelist". You can specify a list of which fonts are exposed to web content (such as the default set shipped with your OS). The preference is used by the Tor browser.

https://bugzilla.mozilla.org/show_bug.cgi?id=1121643

Does Safari actually do something similar? On my Mac, amiunique.org reports 344 fonts in Safari, 331 in Firefox (not using "font.system.whitelist"), and 309 in Chrome.


Doesn't seem to be the case. I tried both browsers and got the same result, i.e. 180+ fonts sent (<0.01% match)


I am unique through the useragent. Apparently Brave and Chrome both put the device model of your phone into the useragent string. It also contains your OS version and Chromium version. I guess those three points alone are able to very significantly narrow you down.


I am really skeptical about the UA uniqueness fraction. I am on the latest version of Chrome on Windows and it claims that 0.17% of the people visiting the site have the same UA string. Mine is:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36

for what it's worth. Can it really be that only two in a thousand people visiting the site in the last seven days have the latest version of Chrome? Or does Chrome just update so often that very few people tend to have the same version at any one time? (If the latter, that suggests that maybe it's not a very strong tracking signal.) It seems more likely to me that the data is just stale, but I guess I don't know much about the true distribution of Chrome user agents.


Same for me, it says <0.1%

I’m using the current iPad mini, which is 10 months old, with the current os version. I highly doubt I’m that unique.


Chrome published an intent to freeze the user-agent string:

https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...


And replace it by Client Hints. How does that prevent fingerprinting?


In a nutshell, to the best of my understanding:

User-Agent is divulged publicly with all requests to all sites, so no protections exist around using it, because it is freely offered up by the user without requiring consent.

Client Hints have to be specifically requested by the remote website, which demonstrates intent to collect data, and thus falls under data collection laws.


> User-Agent is divulged publicly with all requests to all sites, so no protections exist around using it, because it is freely offered up by the user without requiring consent.

This seems like an argument you'd lose if your website failed to respond to requests that were missing the User-Agent.


Perhaps, but I know of no case law precedent one way or the other with respect to User-Agent.


Forcing fingerprinters to switch from passive methods to active methods means you can apply a privacy budget: https://github.com/bslassey/privacy-budget

Then sites that need some information can get it, but won't be allowed to get so much identifying information that you're unique.

(Disclosure: I work for Google)


I'm unsure why all that data is needed when I simply visit a website to read stuff or watch stuff on YouTube. Is it really necessary?


Nope. Not at all. There is a browser[1] that tries to hide this stuff for you (it's the FF-derived TOR browser w/o the TOR bit). That should be a good start.

1: https://www.whonix.org/wiki/SecBrowser


Apparently I'm the only person in the world using linux-5.4.14-zen with a Radeon VII. Which would be kinda cool if that information wasn't being broadcast to every site I visit.


And I'm the only person using the materialistic app (HN reader) on Android 5.1 on a 1st gen Moto X. That I can believe...


Hello from a Materialistic on Pixel user!


It's not that you're special for running Linux, it's just that it's very easy to be unique!

I'm running a stock Windows 10. Microsoft Edge with Ublock Origin as only browser extension.

Software installed is Visual Studio, Visual Studio Code, NVidia CUDA development System, Erlang Dev System, and Microsoft Office. That's it.

You'd think there would at least be "dozens" of us. Nope:

> Your full fingerprint is unique among the 1572109 collected so far.

User Agent is < .01%

> Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4023.0 Safari/537.36 Edg/81.0.396.0

and my fonts are < .01%

> Agency FB, Aharoni, Algerian, Arial, Arial Black and 171 others

Other < .01% are "WebGL Parameters", "Connection" and "Navigator Properties"


I'm on an iPhone 11 Pro Max and I'm unique, apparently due to my language and time zone.


    user_pref("webgl.enable-debug-renderer-info", false);
for Firefox.


The problem is that even if you can disable this one, it's enabled by default, and browser vendors keep adding this shit without any concern for the privacy impact.


What does it mean when i load the test, and it says i'm unique, and then i go back five minutes later without changing anything and it says i'm still unique?

are they tracking that i've run the test before, verifying that i'm me, and telling me i'm still unique, or is my fingerprint differing between two subsequent tests?

also, i'm suspicious about some of these values - only 0.13% of people have a QWERTY keyboard layout? Only 7% of tests have no gyroscope? That doesn't sound right.


It says they set a cookie for 4 months.


Open a private tab and the cookie is not there.


Your username leads me to believe you're browsing from an IP address covered by the GDPR. Accept cookies and try again.

'AmIUniqueId', expires Mon, 25 May 2020 12:28:29 GMT


The number which seemed strangest to me is that my up-to-date pixel 3a phone has a user agent shared with only 0.01% of browsers.


Up to date Chrome Pixel 4 here, same. I'm really the only Pixel 4 that ran this?


Since users of evergreen browsers will have frequently changing version number strings, the history of snapshots of these fingerprints is time-sensitive. This serves to artificially inflate the uniqueness, as an identical device using the same evergreen browser 1 month or 2 weeks ago will not match you now, but would match if it revisited with its now-updated browser.

To be useful, fingerprinting techniques need to somehow be robust against always increasing version numbers of evergreen browsers in ways that this website is not.


that doesn't sound too far out of whack to me, considering the user-agent string changes every time your OS version and your browser version increment.


Why would the most recent version Google makes available to my Pixel 3a phone be different from the most recent version Google makes available to anyone else?


It wouldn't be, they're almost certainly doing something wrong on the back end. I sent identical requests on multiple IPs spaced out over some time, and each one of them returned as being "unique". Even though the e.g. UA values were in fact the same. I suspect some kind of aggressive dedupe of things that look too similar, or some such.


I was surprised that being in the US Pacific time zone was only shared by 2.25% of their data set. Assuming users are evenly distributed across time zones, it'd be 4.2%; but of course they aren't evenly distributed, and I'd expect the west coast of the US to have a disproportionately large number of internet users as well as people who would visit a website like this.

(Though, to be fair, they go by UTC offset, so it's only UTC-8 for a few months of the year due to DST. That probably affects the number; maybe the percentage is higher for UTC-7.)


There are a lot more Asians/TZ than Americans/TZ.


Sites like this and the EFF’s panopticlick err on the side of saying you can be tracked when that might not be true.

For example: I visited this same site a while ago with the same device, and both times it has said I was unique. A new version of iOS came out, so my user agent changed. Unless sites are also storing unique data on your machine (through cookies or localstorage), browser fingerprinting is a crapshoot. This goes double for mobile devices.


Another unusual circumstance is when you're among the first users to install the latest software update; a fingerprint testing website will identify you as a unique user. But you won't be unique anymore within a few days.

It happens a lot in the Tor mailing list. Often, a new major release of Tor Browser comes out, a user is shocked by the upgrade, "OMG! I'm unique and trackable!". Don't panic, just keep using it.


Of course most sites that try to track you are also storing cookies or local storage. It only takes one to tie your two unique fingerprints together... of course your IP address might suffice. Or account/email address, if you're logged into a site that shares data with third parties and use the same address.

I'm sure someone's also devised a way to guess how to bind two fingerprints together when OS or browser updates but many other parameters (timezone, ip, language, screen size, fonts, etc) remain similar enough.


All the fingerprinting tools I've seen so far do not include JA3 signatures, which in my opinion make for an interesting bit of information - they introduce few bits of entropy since they depend on the TLS implementation, but for the same reason they can't be easily spoofed.

Plugging in an article and demo I wrote some time ago: https://jwlss.pw/ja3/


I was going to mention the lack of TCP and TLS fingerprinting too - I wonder if those are actually used by rogue advertisers?

Also, I guess TLS fingerprints would change over time, with browser upgrades, although I'd expect changes to be relatively infrequent.


> WebGL Renderer

Wait, any website knows what sort of graphics card I have?! Or how many CPUs? That's just ridiculous!

No wonder folks can write and target exploits so perfectly nowadays! If it's an integrated graphics, you have the processor family, easy. Wow.

Why would any random site need to know this sorts of information when all I'm trying to do is read text and view images?


> when all I'm trying to do is read text and view images?

Often, the "images" on the web are increasingly becoming 3D graphics dynamically generated and rendered by OpenGL on-the-fly, perhaps real-time ray-tracing is coming soon... You can see a lot of applications of in-browser 3D graphics on data visualization websites, for example. And naturally, the program needs these information to render graphics, but it can be repurposed as a tracking tool.

I disable WebGL manually, and use it only when it's necessary. There are a few plugins that allow you to manage it as well, for example, NoScript, but it's overkill to use it just for this single task.

I think the solution should be an opt-in option. WebGL shouldn't be activated unless the user gives the permission to a specific, first-party website.


Thanks for the idea, I just toggled the "webgl.disabled" switch in about:config; we'll see what difference it makes; I would only assume that my browsing will become faster now and will crash less frequently.

Is there a list of these useless things that any sane user should disable? I already have had fonts disabled with "gfx.downloadable_fonts.enabled" toggled away from the default; any other useless attack vectors with buffer overflows could likewise be disabled?


Yeah, I always disable WebGL.

I was really freaked to discover that all Debian family VMs on a given VirtualBox host have the same WebGL fingerprint. Same for all Windows VMs, all Red Hat family VMs, etc.

But then you can't see all the cool WebGL stuff that gets posted here.


> I think the solution should be an opt-in option. WebGL shouldn't be activated unless the user gives the permission.

More permission dialogs just train users to click "yes" on every permission dialog, so it really isn't a solution (remember Windows Vista?). I'd prefer it if there were some way to make browsers respond to WebGL queries in a similar predictable way to mitigate fingerprinting.


Using a "not enabled until clicked" approach is better - The popup won't be showed unless the user clicks the otherwise missing video - but if the Flash object is a tracker, most users won't even notice it, and the tracker is blocked.

It's not perfect, but works reasonably well and has successfully protected a large number of users from Adobe Flash trackers.


I switched to 30d because with browser update cycles "all time" is kinda useless, imho.

> Content language 31.51% en-US,en;q=0.5

This is default Firefox; I didn't change anything. Surely only Chinese has more users than this language setting?

I'm not even an American or in the US... (all time it was just below 30%, not teal-colored, but already orange..)

Also why is the JS result for Content language 35.40%?

Whereas my list of plugins is 46.2% (higher value seems better) - but I'm absolutely sure there's at least one uncommon one among them...

Hardware concurrency 16 is really interesting, 3.32%, probably all fellow Ryzen users.

TLDR: Surely there are some of those properties where it absolutely makes sense to avoid the uniqueness, but others seem a bit like bullshit metrics.

But on the other hand, my user agent (OS+Browser) = 7.37% is a really good sign for me. I find this a very, very high percentage. (Ok, it's Win10+Firefox, but still...)


Agreed, some of those stats are questionable.

A basic Microsoft Edge installation with en-GB content language header has a similarity ratio of 0.18% -- really?

Java enabled: true -- 0.26%

Java enabled: false (on Brave) -- 8.75%


They make sense.

en-GB would start below 5%; having en-GB on Edge means someone would have to be using Edge (under 1%), and not on mobile for a mobile-first site.

Keeping a current Java version enabled is difficult. Each browser update and each Java update disables Java by default now.

Could be a small Brave sample size.


I fired up Chrome (I'm a Firefox user too), and for content-language it showed "en-US,en;q=0.9" (FF is q=0.5), so I guess each language is broken down into a few buckets.



What terrible browser design to send all that data. I would never have imagined my computer was sending all that off 10 years ago


It doesn't "send the data" so much as "a general-purpose UI platform cannot work unless a program knows the configuration of the UI". The idea that your complex computing environment could have an arbitrarily complex conversation with an app and not expose its identity, while perhaps desirable, is impractical.

You can browse behind a generic user-agent firewall, but you'll get a severely degraded experience that treats an Apple Watch the same as a desktop workstation.


I didn't sign up for "a general purpose UI platform". I got on board when it was a hypertext publication system. They've been boiling this frog for 25 years.

Every year, I'm less and less convinced that JavaScript is desirable to have in a web browser.


Standardization on only a few design variations might fix this? People would complain about monocultures, though.


It's not just about design. For instance there are Canvas and WebGL for interactive content. Different GPUs draw things slightly differently. Someone creates a canvas in the background, draws stuff, reads it back, and since your machine has a particular way of drawing things (because of GPU design, drivers, different browser implementations of the painting stack), you can be tracked. Same goes for audio capabilities. If you don't want multimedia, then fine, it is slightly easier. Then the server can probably try to track you with your upload/download speed, ping, etc. They can tax your CPU to see how fast it is. Then they get your browser width/height. Combine them and you can be tracked pretty accurately. There are countless avenues for fingerprinting.
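
For reference, a bare-bones sketch of the canvas probe (illustrative, not any particular tracker's code):

    // Draw fixed content and read it back; antialiasing, hinting, and GPU
    // quirks make the bytes stable per machine but different across machines.
    function canvasFingerprint() {
      const c = document.createElement('canvas');
      c.width = 200; c.height = 50;
      const ctx = c.getContext('2d');
      ctx.textBaseline = 'top';
      ctx.font = '14px Arial';
      ctx.fillStyle = '#f60';
      ctx.fillRect(10, 5, 100, 30);
      ctx.fillStyle = '#069';
      ctx.fillText('fingerprint', 2, 15);
      return c.toDataURL(); // hash this string to get the fingerprint component
    }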


Well, sure, but at the hardware level, more standardization is possible too. If you have a popular model of computer and it's the same as everyone else's computer, it's hard to get much out of identifying it. This is kind of what Apple does by offering a limited number of models.

Another approach would be to use standardized VPN's that hide client machine differences.

There are downsides, of course. I'm not sure people care about fingerprinting enough to do all that.


Using the user agent string for this is not needed and is not common now for new projects.


Indeed, I didn't write "user agent string"; I wrote "user agent".


I've heard of a similar phenomenon where hackers probing a system can fingerprint different software stacks based on what they get as responses to different "undefined behavior" inputs. Anything that communicates with the outside world is intrinsically giving up some information about itself.


It was worse when flash and java were in common use.


Is there something actionable I can take based on the results of this page? E.g. is there some Firefox add-on that obfuscates attributes used for fingerprinting?


Something here doesn't add up. I'm unique even on my standard Samsung S10e with up-to-date Chrome or the stock browser. My screen height has a similarity ratio of 0.11%. My user agent is <0.01%.

Perhaps their data is under-representing mobiles?

I can see how fingerprinting would work on a Windows PC with its vast number of possible combinations of cpu/gpu/installed system fonts/add-ons etc. But how would this work on mobile?


Yea, I’ve gone there using a stock iPhone and a stock iPad, both using Safari and neither one with custom fonts, plugins, etc. iPhone: only 48 browsers have exactly this fingerprint. iPad: only 14 browsers have this fingerprint.

My list of fonts is only 0.55% similar, and my user agent is only 0.34% similar.

Those are hard to believe on a device where these things really don’t get customized.


Is it just me, or would people rather be able to spoof or disable these fields without wading through thousands of lines of C++ code?


Most of these fields you don't want to disable or spoof because either it breaks your web experience (imagine images randomly not loading because you spoofed your headers and the server thinks your browser supports .whatever or because you emptied the list and the server doesn't know what to send) or because it makes you unique (having your build id be "" is certainly more unique than whatever others actually use).

The tests you can spoof, like canvas fingerprinting (where spoofing doesn't break the canvas, and the results are already so spread out that being unique isn't an identifier in itself), are already built into a lot of browsers. The site doesn't really acknowledge this, though; it just says "you're unique" without checking whether it's a different unique value each time, in which case it doesn't identify you at all.


For some, no it would not really be possible.

The Canvas fingerprint is the biggest one, as it relies on different forms of hardware acceleration which is based on GPU+CPU+configs for your computer and browser.


Can't be that hard, considering Firefox allows for blocking Canvas fingerprinting in its settings, and the site indeed shows that my canvas data is shared with around 6% of the users, so definitely not unique.


What I don't understand is that the user agent for the latest version of Firefox (on Windows 10) has a similarity rating of just 0.31%.


That's across "all time", not just the last 7 days.

But for the last 7 days, Firefox has over 300%, and Windows over 285%. Gecko is over 700%. Screen left of 0 is 250%. -- I'm guessing some kind of calculation error?


I doubt the accuracy of this fingerprint.

It claims my user agent (the latest stable Google Chrome - default browser - on a stock Umidigi android device) is unique.

Considering there are probably 5 million people worldwide with my setup, I am surprised it's unique.


Keep in mind that those 5 million people did not submit their fingerprint on this website, so they are not "known" by this tool.


TOR browser on highest security level, javascript disabled and windowed returns: Almost! (You can most certainly be tracked.)


I've just tried the same, but the HTTP header user agent and the javascript one are different. Maybe TOR varies some attributes?


I tried it myself in Tor Browser, and I think this conclusion "You can most certainly be tracked" is highly misleading. Without JavaScript, the only information collected by the website is the headers sent by the browser, listed below...

> User agent: Mozilla/5.0 (Windows NT 10.0; rv:68.0) Gecko/20100101 Firefox/68.0 (all time: 2.19% / 30 days: 10.80%)

> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8 (all time: 52.15% / 30 days: 37.04%)

> Content encoding: gzip, deflate, br (all time: 66.35% / 30 days: 92.02%)

> Upgrade Insecure Requests: 1 (all time: 27.33% / 30 days: 85.86%)

> Referer: https://amiunique.org/ (all time: 16.55% / 30 days: 60.97%)

> JavaScript disabled (all time: 4.70% / 30 days: 14.09%)

Surely, there are ways to track users without JavaScript, but none of them is inspected by the website. The website only inspects the headers, and from the listed information above, the tracker doesn't learn anything more than "This user is running the latest release of Tor Browser with JavaScript disabled", and at least 100,000 people are doing it right now.

But because the sample size of this website is limited, it only sees that these attributes account for a very small percentage of the overall traffic; ignoring the fact that these headers are generic, it jumps to the conclusion:

> "You can most certainly be tracked"

Which is misleading. A Tor Browser with JavaScript disabled is still one of the most difficult browsers to track.

The website actually tells you that,

> But only 1440 browsers out of the 1560340 observed browsers (<0.01 %) have exactly the same fingerprint as yours.

If you are the only person using the latest version of Tor Browser to access this website with JavaScript disabled, yes, you can be tracked as a "Tor user without JavaScript", and if you enter personal information, your identity can be crosstracked between websites (if you are still the only Tor user with JavaScript disabled). Otherwise, not much.

Interestingly, when a new Tor Browser release comes out, there will always be a user who upgrades the browser before everyone else, goes to a fingerprint testing website, tests the browser, and says "OMG! I'm unique and trackable!". In this case, don't panic, just keep using it; you won't be unique anymore within a week.


Worth noting that by default, the Tor Browser has JavaScript enabled (only disabled on non-HTTPS sites).


Is this dataset correct?

The accelerometer, gyroscope and proximity sensor have a ~9% false ratio here, which I interpret as 9% laptops/desktops and 91% phones. That's a bit too many phones. But use of Adblock is over 34%. Also, I see 15.36% Chrome PDF Plugin and 23.81% Native Client, both of which only exist on PCs and again don't really mesh with only 9% non-phone. What's going on...? Are there laptops with all three things listed...?

Also, everything else puts Chrome market share above 60% these days, only 38% here.


> The accelerometer, gyroscope and proximity sensor has a ~9% false ratio here which I intepret as 9% laptops/desktops and 91% phones.

Some of the stats are clearly broken. For the 7 day duration, those stats are closer to 95% false. For the 90 days duration, I see values like

  Platform       153.00% Linux x86_64
  Screen height  253.53% 1080
  Media devices  159.95% Timeout
> Also, I see 15.36% Chrome PDF Plugin and 23.81% Native Client

I don't see PDF reader stats at all.

> Also, everything else puts Chrome market share above 60% these days, only 38% here.

Easily explained by bias if it's even correct. The kinds of people who would try this service are also the kinds of people who would use a niche browser.


What is most surprising to me:

Apparently my content language (JavaScript) alone is already unique (en-US,en,de,zh-CN,zh). Being a native German speaker who speaks American English (I live in the US) and Chinese and actually has those languages enabled in their operating system.

Without JavaScript enabled the HTTP header for Content language is <0.01% -- (en-US,en;q=0.9,de;q=0.8,zh-CN;q=0.7,zh;q=0.6)

Is there any way to have a language installed in the operating system without leaking that information to the browser?


I've changed my language to only "en-US,en;q=0.9" and it shows as 8.05%.

You can do that in Chrome from Settings > Languages; remove what you don't need, but I guess you will lose autocorrect for those languages.


There is (I'm assuming you mean to the web, not to the browser).

On Firefox: Options -> Language -> Choose your preferred language for displaying pages


Yeah I get the same from: en-NZ,zh-CN,zh,en-GB,en-US,en


Honest question here about the risk of being “unique”...:

If my unique identifier changes with each browser version + os version + timezone + build id + fontpack + screen size... am I really trackable through this vector? Those things change all the time for me at least. Seems like a very short term method of tracking at best. Why should we be afraid of these things? I had the same question about panopticlick (or whatever EFF called it).


Fingerprinting has lots of flaws like you mentioned.

Too many people have similar devices for it to be the primary way to track people.

It is really a backup method of tracking that works some of the time based on a probabilistic match.

Your IP address is also a big factor in finding a match.


It accurately identifies my model of iPhone and that I’m running an iOS beta, but it doesn’t appear to know anything that isn’t “I am an iOS developer”. That’s probably still something that can be tracked, but it boils down entirely to User-Agent which will vary weekly in lockstep with thousands of others.

I am hesitant to extend the result that I can be uniquely tracked to category “iOS developers” beyond that.


Having a billion people using the exact same model of a computer (iPhone), with almost identical hardware, software, peripherals, and system configuration, is surely an advantage in terms of anti-tracking.

I recommend trying to visit the same page on your computer; the slightest variation in your hardware or software configuration will pinpoint your identity with 99% certainty: What is the position tolerance of your canvas? What is the list of fonts or browser plugins you have installed? What is the latency of your audio interface?
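
That last one doesn't even need a permission prompt; a sketch of the classic OfflineAudioContext probe:

    // Render a fixed signal through a compressor offline; the summed output
    // varies slightly with the audio stack and becomes a fingerprint component.
    const actx = new OfflineAudioContext(1, 44100, 44100);
    const osc = actx.createOscillator();
    osc.type = 'triangle';
    osc.frequency.value = 10000;
    const comp = actx.createDynamicsCompressor();
    osc.connect(comp);
    comp.connect(actx.destination);
    osc.start(0);
    actx.startRendering().then(buf => {
      const sum = buf.getChannelData(0).reduce((a, v) => a + Math.abs(v), 0);
      console.log(sum); // a float characteristic of this machine's audio stack
    });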


I use an Apple computer, too, and I only use the system fonts, don’t use adblockers in any browser, and don’t use plugins either.

(Yes, I know how tracking works. It’s ironic to watch people realize that all their protest actions are tracking beacons. I wish it was more widely understood that being obvious is a consequence of being obviously different.)


This includes battery level which I think is pretty weird.


Agreed, that was the only thing which stood out to me as a surprise. Why would any website need to know my battery level, without an explicit permission? Seems like a gross oversight.


Well that, and it makes you appear more unique than probably desired (to the site).


Browser fingerprints being so unique cuts both ways.

So if I hit this site from the VM that one of my evil twins uses, its Firefox fingerprint will also be ~unique.

And I can have lots of VMs. Maybe not all running at once, but still. And yeah, they use different exit VPNs.


I have multiple attributes that are each unique:

* Content language (HTTP header)
* Content language (JS)
* List of fonts (JS)
* Media devices

A bit surprised that it doesn't even need to do any fancy combinations of identifiers to get a uid in my case.


Remember that you are unique in the list of visitors to this website. That is a limited group. You are probably unique anyway, but the numbers here are not accurate.


Same for me. Content languages are all unique. Same for fonts and media devices.

Really wonder how I can stop my browser from leaking that information...


The results on https://panopticlick.eff.org seem much more accurate. Probably since Panopticlick by EFF is more well-known and therefore has better data.


They also recommend installing Ublock botnet (not Ublock Origin) and enabling DNT (a well-known way to track); I wouldn't trust those guys.


Alright, so how do I fix the unique ones like Canvas?

Edit: To block canvas fingerprinting I set `privacy.resistFingerprinting` to true in Firefox's about:config. Now I am trying to figure out how to disable the font list and Media devices, etc.


I am looking into this too. Looks like it's `font.system.whitelist` but I'm not sure of a "generic, common" list to add there.


>ANGLE (NVIDIA GeForce GTX 1070 Direct3D11 vs_5_0 ps_5_0)

Wow I didn't know it reveals the GPU


The Media Devices query might always return "unique" on some browsers (e.g. Firefox). Firefox, for example, returns a random string which persists for that origin across a single browsing session (but which is different after the browser is restarted, for example).

It's therefore like a cookie, but weaker in that it gets cleared on browser restart (and, in private browsing, it's always changing). So it's not very useful for tracking, since a site could just use a cookie instead.

If that's your only Unique attribute, then your browser might be less unique than the website claims.


What does it mean that I have a unique user agent? I just use normal brave browser.


I hope it is a joke (pretty good one if it is), but on an off-chance it is not: it is entirely plausible that you are the first Brave browser user to visit the site (or at least Brave of that version).


Not a joke, panopticlick by eff does this too and explains how it works.


Are you sure you're looking only the "User Agent"? The text at the top is describing your entire "browser fingerprint." It includes everything the server could gather from you. Including cookies, browser version, width/height of the window, etc.

In case it's not clear, uniqueness should be seen as bad in this case. It means you can be tracked.


I'm also getting a unique user agent with Brave.


I'm using Brave and my useragent is not unique.

Version 1.2.43 Chromium: 79.0.3945.130 (Official Build) (64-bit)


No idea what it means, but I am also curious. I literally just installed Brave on iOS before checking it out, so I tried it on iOS Chrome and the fingerprint was still unique.

I think it might just mean that we leak waaaay more data than most of us realize...


Most likely that it isn't very common. Mine said <0.01% for the Brave user-agent.


Firefox developer edition / windows ranks at 0.02% :)


"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:72.0) Gecko/20100101 Firefox/72.0" (mainline Firefox) is apparently 0.02% as well.


My user agent also says 0.16% similarity and I’m just using Safari on an iPhone. How can that be?


There's more than one iPhone, more than one iOS version, and more than one version of Safari. Multiply the odds of someone having your exact setup AND browsing this site.


I get just a 0.31% similarity rating with the latest Firefox.


No one has the same fonts I do, apparently. You could also probably infer my occupation from my preponderance of monospaced fonts.

I'm surprised to find that simply by having Ubuntu that I tweaked to my liking and a laptop, you can uniquely identify me by how much screen space I have left.

Can anyone give me an explanation of how the media devices thing works, though? Are video in and audio in given some random hash which websites can then use to identify me?


It's a bit strange that they recommend using the Tor Browser, but the Tor Browser is shown as being "unique" using their fingerprinting setup. Presumably this is just a lack of data rather than a real claim that browsers like the Tor Browser are actually not sufficiently resisting fingerprinting (after all, apparently 55% of people they've got data for use Linux and 41% use Firefox!).


FWIW, my Tor Browser also shows up as unique. Just the fonts alone are <0.01%.


How do I configure Chrome spell checking dictionaries without letting it inform the websites about the languages I use? The set of languages makes me unique.


Would I be right in saying that if you use iOS Safari (which a decent number of mobile users use) you are not very unique at all, and any tracking based on browser fingerprinting is pretty useless? Or is it the combination of a non-unique browser fingerprint with a slightly more targeted origin IP address from the ISP that makes it almost worthwhile/possible to try to identify a person without cookies?


I was quite surprised that on my iPhone 8 Plus with iOS 13 I was totally unique apparently. How exactly could that be possible if this is a device where I cannot install any plugins, change fonts, or really do anything to make it more or less different to another iPhone?


I’m surprised at how low the percentages are on my iPhone. It says only ~35 other people have had the same fingerprint as me.

I’m running safari, private mode, plus a single content blocker app.


Read this, did it, felt vaguely appalled.

But: seems to me that if you do something unusual (turn off JS, use a weird and wonderful browser, hide browser stuff from The Internet) then you make yourself even "more unique". Turning JS off still doesn't really help, and more to the point makes the modern web basically unusable.

I ended up with: meh, nothing anyone can do, why worry?


> Turning JS off still doesn't really help - and more to the point makes the modern web basically unusable.

I have the opposite experience. Turning JS off makes the modern web much more usable!

Regarding "uniqueness", if you browse from a fixed IP address then you are unique anyway and your browser fingerprint does not really matter.


Do you have a good example of a reasonably high profile site that is unusable without js? One option (I know this sounds like a lot of work!) would be to disable js and whitelist only those sites you trust not to track you, that require js to be usable.


The site tells me that 41% of users share my web browser, that is Firefox. That seems insanely high given the general market share of FF. Any idea why it's so popular on this website? Of course I would expect privacy conscious people to be more likely to use FF but 40+% seems unbelievably huge especially when you factor mobile browsers and the like.


Probably because most of this website's users worry about their privacy and use Firefox instead of Chrome. But Chromium shows a 0.66% share and Chrome shows a 38.28% usage share.


Funny, this website has been posted over a dozen times on HN since 2014, and only recently has it broken above 10 points to 512!

Look at this link:

https://news.ycombinator.com/from?site=amiunique.org


"(<0.01 %) have exactly the same fingerprint as yours."

If Apple cares about privacy, they need to get on this in mobile Safari ASAP.


That made me want to see whether the calculations have some assumptions about variables being independent: it seems unlikely that the default browser on an OS shipping in 7+ volumes has various headers, fonts, etc. which are so identifying. I could believe some hardware variation with canvas/WebGL but the iOS font list wasn’t even customizable until 13.


Browsers could defend against this (especially in combo with central malicious-site protection): no legitimate page should be using many of these features, and certainly not the combination of features.

Also turning JavaScript off helps (although that becomes a signal too, just like DNT).


All time : However, your full fingerprint is unique among the 1556046 collected so far.

Android - default Chrome.

---

Fyi, this whole tracking thing isn't that hard to circumvent. You only need one parameter to change to generate a unique hash.

With all the parameters they include, this should be fairly easy to do.


my user-agent (in particular my Linux distribution) was making me unique. To be "less unique" I went to about:config and created these preferences:

    general.useragent.override: "user_agent"
    general.useragent.overridepreference: "user_agent"
replace user_agent with the UA of your choice. This made me less trackable according to https://panopticlick.eff.org and https://amiunique.org


Qwerty keyboard layout yields 0.7% uniqueness? Do other browsers just not declare this?


AZERTY gave me 0.03%, even though France and Belgium together have >1% of the world population, and definitely more than one percent of all internet users.


In my Firefox it says "Not supported"


Not supported is 3.96%

So anything that's not QWERTY nor "unsupported" makes up the remaining ~95%?


I am unique! It looks like the main culprit is that I'm running Firefox Developer Edition so my reported version is a tiny percent of all users and combined with Linux, Firefox, UTC-5 and en, that makes me one in 1561147.


This misses innerHeight, availHeight etc. which are also highly idiosyncratic due to different settings of the dock/task bar size, installed browser plug-ins, opened bookmark side windows and bars etc.


However, your full fingerprint is unique among the 1545859 collected so far.


Reminds me of evercookie. Previously, in HN ... [1] (10 years ago!)

1: https://news.ycombinator.com/item?id=1714446


Why not intercept the different system calls and randomize their outputs?


Hang on, how is this unique:

    Keyboard layout 0.09% Qwerty
Is this saying less than 1% of users use a QWERTY keyboard? I must be living under a rock if that's the case.


My fingerprint is unique apparently. Some water got in my main phone, so now I'm using an older Android 7.0 phone with Firefox. Just that fact got me to 0.02% of all users.


Over the last seven days, 109.5 percent of all users have been using my operating system, and 129.86 percent of users have been using my web browser? I think not.


This is great. I knew fingerprinting in this fashion was possible, but seeing the actual figures attached to some of these headers and attributes is a real eye-opener.


Though I'm not surprised by the results, the one that sticks out most to me is the content language:

> Content language <0.01% en-US,en,bg,es

Bruh... 0.01%?!!? Seriously?!?!?


I’m en-gb alone and that’s only 0.69%. By comparison, 0.01% for the highly specific combination of languages you have set doesn’t seem that strange.


Interestingly this led me to notice Firefox limits hardware concurrency to a max of 16, changeable in about:config via the key dom.maxHardwareConcurrency


I am pretty sure I am going to be unique browsing on a version of chrome which is 65 versions old so I am not even going to jump into that pool.


How much of this requires JS to detect and then send to the server? as opposed to can be processed entirely server-side?


Given the number of ways of fingerprinting, how can we reduce our uniqueness? Is it even possible not to be unique?


Only 4% of the users who visited this test page use Ubuntu. Linux on the desktop is really not a thing.


Where does it say your distro?

For me, it just detects that it's Linux (all time: 8.15%), while Platform is Linux x86_64 (all time: 10.93%). Kinda funny that there are more x86_64 Linux users than Linux users. Makes me wonder whether Ubuntu users are considered Linux users at all..


Doesn't seem so bad. It's probably double that number including other distributions. Desktop users are also diluted by all those mobile users.


> 30 days : But only 40 browsers out of the 10774 observed browsers (<0.01 %) have exactly the same fingerprint as yours.

Updated iPhone Safari. En locale. I have hard time believing it's that unique.


The above site beachballed and was nonfunctional for me.

A better site with similar functioning is: https://panopticlick.eff.org/


panopticlick said my chrome browser was not unique.

this tool said it is.


Panopticlick is interested in solving a problem (and considerable progress has been made on that), while this site, as its name suggests, is here to sell you on a belief.

It'll cheerfully tell an unlimited number of identical browsers that they're "unique" because that's the message it is here to sell.

If it gave these supposedly "Unique" browsers an actually unique identifier they'd be able to compare it and see that they're not so "unique" as claimed or worse that the same browser gets different "unique" identifiers and so they aren't identifiers at all. So that's why it doesn't do that.


How can the percentage be greater than 100%?


How to protect?


Good news! I am unique. My experience is valid. I really needed to hear this.



