I worked in the ad industry. Every web browser, including Brave, Tor, and Safari, is uniquely identifiable, even on the same hardware.
All the public researchers and browser vendors are years behind the industry's device-fingerprinting techniques (probably 5+ years).
Canvas, WebGL, etc. are techniques of the past. There are much more advanced ones that can identify devices completely uniquely (on both desktop and mobile).
We also know when users fake their fingerprints; the algorithm respects that decision, even though we still know who the user is despite state-of-the-art spoofing.
The latest methods don't even use JavaScript. CSS alone is enough to identify every device uniquely, though you'd still need JS to send the data back.
Every public researcher I've seen has been fed honeypot techniques that they consider state of the art, even though the industry is way ahead of the researchers.
Not to be picky, because to be honest I completely believe that what you say is plausible, but that's a lot of outrageous claims with very little in the way of examples or evidence.
In order to believe what you're claiming here, we have to believe that
1. There is magic CSS/JS that can not only tell different browsers and devices apart, but can tell apart two phones from the same manufacturing run running the same software.
2. Despite the fact that this magic code would have to run in the client browser where its content, execution, and the data it sends back are all plainly visible to anyone who can hit ctrl-shift-j, no "public researcher or browser vendor" knows anything about it.
3. This technology is not used to combat ad fraud because of some weird conspiracy at Google.
It could be true, I suppose, but I don't see why anyone would believe this based on the evidence so far.
Large companies have fixed ad-spend budgets. If they don't spend it, they lose it. It doesn't matter if it's lost to fraud.
Google and Facebook advertisers have more specific budgets, especially on Facebook, which has a large number of small advertisers.
Large ad networks, however, get large advertisers with budgets in the tens of millions on average.
There is more pressure on Google and Facebook to be competitive than on, say, smaller ad networks.
Also, the techniques I mention have not been used in the wild since the PoC on millions of devices a few months ago. They're a backup plan. And not every adtech company has them; maybe 2 or 3 at most are at the cutting edge of tracking.
Google knows this very well. You really think Google couldn't counter fraud? It's fake, manufactured outrage. No one wants to give away their "secret sauce" just yet.
Also right now many adtech companies are going bankrupt. The larger ones are waiting for the smaller ones to die off, after which the landscape changes.
> Also right now many adtech companies are going bankrupt.
As someone currently working in IT for an adtech company, I can at least provide one data point: the company I work for is slowly (but surely) dying.
Just today I decided to switch to FF and try the NoScript experience. It works well enough so far. Funny that the crippled experience is even better in some weird ways. I used to scroll Reddit forums; now I can only read the first few posts, and that's fine. I used to expand a lot of comments; now I can't expand them, but it saves time. Sure, self-control would be better, but this works too :) It's good to know that without JavaScript I'll send less data to that anti-human industry.
Wait for the new CSS version that our team kept a watch over. It won't require JS once it comes out. ;D
Also, we know many exploits to bypass NoScript if we wanted to (yes, I know there are bounties for this, but we were paid much more than any public bounty for this stuff).
As far as I understand, the only way it can work without JS is either by using @supports or similar feature/media queries (which would be the same for all users on the same hardware and browser) or by requiring user interaction (like a :hover state or clicking a link).
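For readers unfamiliar with the idea: here is a minimal sketch of the JS-free probing being described. Each conditional rule triggers a background-image request whose URL encodes the matched trait, so the server learns device properties from which images get fetched. The endpoint `tracker.example` is hypothetical, invented purely for illustration.

```css
/* Hypothetical CSS-only probes. The logging endpoint is made up;
   any server you control that records requests would do. */

/* Media-feature probe: fires only on high-DPI displays. */
@media (min-resolution: 2dppx) {
  body { background-image: url("https://tracker.example/log?dpr=2"); }
}

/* Input-capability probe: fires on touch-primary devices. */
@media (pointer: coarse) {
  main { background-image: url("https://tracker.example/log?pointer=coarse"); }
}

/* Feature probe via @supports: fires if the engine implements the property. */
@supports (backdrop-filter: blur(1px)) {
  header { background-image: url("https://tracker.example/log?backdrop=1"); }
}

/* Interaction-gated probe, per the :hover example above. */
a.tracked:hover {
  background-image: url("https://tracker.example/log?hovered=1");
}
```

As noted above, probes like these only distinguish classes of hardware and browser, not individual devices, which is exactly why the claim of per-device uniqueness from pure CSS is the contested part.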
Why would I want to give out that part completely? The upcoming spec change includes two important things that will allow it to work without JS or user interaction with the page. Obviously I'm not going to give it away before the spec is implemented and gains enough usage on the web that going back is impossible.
So basically you have no proof, no discernible connections to anything you are talking about (no real user/comment history) and are posting pretty outrageous claims about tracking.
Why would I believe you over all the public research available? If you really were working on this why would you be stupid enough to post about it here?
No, that is amateur work. State-of-the-art CSS fingerprinting techniques can uniquely identify 1 device out of billions.
Also, that is nothing but reading the screen dimensions and other browser attributes, which are useless now. The current state of the art cannot be mitigated unless you put a ~95% performance penalty on the CSS engine, AFAIK.
All the links you provided describe techniques that would only work on mobile devices with access to the sensors. On my desktop PC there's no GPS, no gyroscope, no webcam, and no browser access to my microphone.