- Nick Craver, Architecture Lead at Stack Overflow
It's ridiculous. It's a text-based ad. At worst, it's a clickable image. At what point did it become okay in your minds to let advertisers run arbitrary code?
I've left ads turned on specifically on StackOverflow because 1) I want to support StackOverflow, and 2) I trust them not to run malicious ads.
I don't even care that they're running ads network-wide. But if they're going to be running these kinds of ads anywhere on the site, they're going right on the ad block list along with everyone else.
Imagine a TV ad that tries to make your phone call a 1-900 number so they can rip you off, and the station says they don’t know where it came from but they’re trying real hard to put a stop to it. And somehow watching the ads themselves before broadcasting them never crosses their mind.
In any other context we would call this a security vulnerability. I think that label also applies here.
But that's assuming it doesn't try to connect elsewhere if it detects it doesn't have internet.
Imagine having to take countermeasures like this to prevent things you've purchased from spying on you!
No single publisher today really has the power to change much, no matter how big they are. The issue lies with adtech (like Google) and advertisers.
Malware and ad fraud are primarily a business problem, not a technical one.
It's not that simple. There are many layers in the supply chain that currently require JS. Publishers can't disable the JS and they can't demand JS-free creatives either.
You’re saying that doing this would drastically decrease ad revenue. Which is what I’m saying too: it’s about money, not necessity.
Would a site like SO be unable to survive without ads that run arbitrary JS? I don’t know. Even if the answer is that they must do this to survive, it’s still insane that content companies let randos inject arbitrary code into their pages. If this is so entrenched in the industry that there’s no way around it, that just means the industry is insane.
Simple doesn't mean it's easy or realistic. Yes, adtech has major problems, but they're being slowly worked on and won't change overnight. This applies to any other industry where you think you can just walk in and solve everything if everyone just did X. Reality doesn't work that way.
Of course reality doesn’t work that way. Ad companies aren’t going to change, because they like money and don’t give a shit about users.
We’re stuck in a local minimum. It’s insane. It could be easily fixed if everyone just stopped doing the insane things. And they won’t stop.
And yes, I'm enjoying my 90's internet and enable JS when it is needed (rarely) for specific domains.
* adverts are vetted by a human
There have been a few interesting blog posts from businesses outside of the adult entertainment industry where they discuss just how much work is involved in getting an advert approved on adult sites.
It’s a sad state of affairs when an adblocker is less required on porn sites than it is on Stack Overflow.
Adult ads are definitely not better and are served by even looser networks that allow anything. That industry has pioneered things like popunders, clickjacking, and monetizing every possible action on a window while serving as the primary vector for malware and browser bitcoin mining. I'm not sure what blog posts you've read but the only strict standards they would have is on getting paid.
What you’re effectively doing is looking at Source Forge and then arguing that Github, Gitlab and Bitbucket are all probably just as bad.
* or have a human manually vet every creative
The problem here is you neither want to control their access nor take responsibility for monitoring their access. So the blame equally lies with yourselves for not managing an easily exploitable vector of attack.
If this were any other system, eg VPN, security professionals would tear you a new asshole and point out just how irresponsible your lack of management is.
Your only excuse here is greed, and frankly I’m disgusted.
I'm not sure who you think I am or why you're accusing me but none of this is down to a single person.
Could we require advertisers to sign their ad code to have a trail of where it came from, prevent tampering, and make it easier to pull the plug on bad actors?
The people bearing the costs of the internet ad economy aren’t the people in any position to do anything about it. So there’s very little pressure to fix anything.
Maybe if the US government started threatening to enact something like GDPR unless the ad industry gets its shit together.
You also need to somehow <iframe> the ad content (and serve it from somewhere else with the feature policy header set/attribute on the iframe set) or else sacrifice use of these features on your own site.
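A minimal sketch of that setup, assuming the creative is hosted on a separate origin (the `ads.example.com` URL is hypothetical):

```html
<!-- Ad served from a separate origin so it cannot touch the host page.
     The allow attribute applies a per-frame feature policy; dropping
     "allow-scripts" from sandbox would block JS in the frame entirely. -->
<iframe src="https://ads.example.com/creative.html"
        sandbox="allow-scripts"
        allow="microphone 'none'; camera 'none'; geolocation 'none'; autoplay 'none'">
</iframe>
<!-- The ad server can also send the equivalent response header
     (Feature-Policy, since renamed Permissions-Policy):
     Feature-Policy: microphone 'none'; camera 'none'; geolocation 'none' -->
```

The point of the separate origin is exactly the one raised above: if you set the policy on your own top-level document instead, you sacrifice those features for your own site.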
The solution is to make the ads inert. They do not need to run code.
I agree that StackOverflow is at fault here, but enabling JS is not a binary choice — "allow all JS on this site" vs "block all JS on this site" are not your only options.
Tools like uMatrix allow me to control JS coming from different domains on different domains independently. For example, on SO I have enabled JS from Stack Exchange and related domains, but not from Google or other snoopers.
"The ad is attempting to use the Audio API as one of literally hundreds of pieces of data it is collecting about your browser in an attempt to "fingerprint" it... Your browser may be blocking this particular API, but it's not blocking most of the data."
Are you really aware of the issue? The issue people have here is not that the ad is trying to access the Audio API per se, but that it is trying to fingerprint users.
This is not just about ads, but about third parties fingerprinting and tracking users one way or another. It's plain evil, and not a decent thing to continue foisting on your unsuspecting users after you've known about it. Tell management to take an ethical stance and preserve the reputation of SO.
The only time they'd do that is if the marketing team decided that the value-add from taking ads off cancelled out the profit loss from taking the ads off.
Maybe he (or someone else in the team) has already given this as a temporary solution but it's been rejected. Since we don't know what's going on in the background, this suggestion being put on a public forum is still worthwhile. It could also help external parties (like HN readers) add more pressure in not letting this kind of surveillance continue just because the company doesn't want to stop making money while they're working on a solution or waiting for Google (or someone else) to help.
Every minute they delay cutting this off puts thousands of people in a position of vulnerability.
- Stack Overflow makes a blog post about not using dynamic ads.
- Dynamic ads found on Stack Overflow, with aggressive fingerprinting.
- Architecture Lead doesn't know how this happened and is getting serious.
I have so many questions. I hope this gets a post-mortem.
Perhaps you should stop doing that.
If you're serious about this, I've built tools for the publisher side for stopping exactly this.
My email address is in my profile.
It's hard to read the obfuscated code and be sure what's being done with the browser environment information. This script seems to generate some hash and put in some global variables, presumably for some other script to consume. I don't know whether such scripts send it to a server, compare it locally to a previously-known value, or ignore it.
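For illustration, the general shape of such a script is roughly this. This is a toy sketch, not the actual obfuscated code: the `env` object stands in for `navigator`/`screen`, and FNV-1a is just one common hash choice:

```javascript
// Toy sketch of a fingerprinting script: gather environment attributes
// and reduce them to a stable hash that identifies the browser.
function fnv1a(str) {
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return h.toString(16);
}

function fingerprint(env) {
  const parts = [
    env.userAgent,
    env.language,
    `${env.screenWidth}x${env.screenHeight}`,
    env.timezone,
    env.audioHash, // e.g. derived from OfflineAudioContext output
  ];
  return fnv1a(parts.join('||'));
}

// Two machines differing only in screen size get different hashes:
const a = fingerprint({ userAgent: 'UA', language: 'en-US',
  screenWidth: 1920, screenHeight: 1080, timezone: 'UTC', audioHash: 'abc' });
const b = fingerprint({ userAgent: 'UA', language: 'en-US',
  screenWidth: 1366, screenHeight: 768, timezone: 'UTC', audioHash: 'abc' });
console.log(a, b);
```

Whether the resulting hash goes in a global for another script to read, gets sent to a server, or is compared against a stored value is exactly the part you can't tell from the obfuscated code.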
People looking at the source code, like what happened here.
Imagine if all the ads in the print edition were spying on and tracking your every move.
Re-selling digital personas as commodities must be far more lucrative.
Is that the evil twin?
Their other income is from job ads, and I guess the value there is that they have lots of data points about their logged-in users (with scores high enough to imply they've interacted with the site a fair bit) in the form of what is posted; that's worth more than the aggregated list of websites a user sees (as reported by ads).
I'd love to know more about this, as I have very little understanding of the economics of serving targeted ads. How much can they be making from ads?
This library is very popular.
Google is currently about as far away as it has ever been from its previous world-famous "don't be evil" corporate culture.
Another example is AMP, where Google wants to de-individualise URLs. This is being driven to the extent that Chrome on Android makes it harder to edit the URL.
Or games like Ingress or Pokémon Go, which in my opinion help Google constantly update their WiFi-SSID-to-GPS-location database. This database is then furthermore used to track users' locations through a little permission called "WiFi Control", which also cannot be found in the regular App Permissions settings entry.
To me, "WiFi Control" sounds nothing like location tracking. But I have to admit I am not a native speaker, so I might be misunderstanding something.
How We Make Money at Stack Overflow: 2019 Edition: Taking money from Microsoft and Google fingerprinting our users 100+ ways
1. Text based ads only (no third party js)
2. HTML based ads but no js (run it through DOMPurify https://github.com/cure53/DOMPurify)
3. Look for a js sandbox -- this _will_ break arbitrary js, will not be supported in all browsers, and will require dev work on your side:
* Google Caja https://github.com/google/caja
* MentalJS https://github.com/hackvertor/MentalJS
I think using a sandbox iframe is not going to be able to defeat browser fingerprinting, because the sandbox control options are not rich enough. You would need to block all JS.
Or use iframe.sandbox, which was designed for it. https://www.w3schools.com/tags/att_iframe_sandbox.asp
1. scrollbars and positioning can cause problems with iframes that an inline div doesn't have, especially if there are multiple small iframes on the page.
2. As soon as you allow script in the sandboxed iframe, you are susceptible to these types of fingerprinting attacks. The fact that you have origin isolation doesn't really block what the ad was doing, because iframe sandbox was never designed to block fingerprinting; it was designed to create a separate origin that gives the dev broad control over features like "allow js", "allow access to origin", etc.
I'm not quite sure what you mean here, but I'm curious. Have any examples?
But at the same time, you want to see all the content in the iframe. If you knew ahead of time exactly the layout of the text in the iframe you could do this, but it's harder when you have dynamically generated content inserted into the iframe, and now add to that wanting the page to be on different devices with different viewports, resolutions, users resizing the page, users increasing or decreasing text sizes for accessibility or changing default fonts.
And if you don't control the content, some of it may contain fixed size elements or absolute positioning inside the frame.
It's a really difficult problem that we were struggling with before ultimately giving up on trying to use iframes for this purpose. And when you make a mistake you either get ugly scrollbars in your iframe or part of your content is cut off when the user resizes the page.
That's why I think the idea of running each site in a container is so effective.
And while we're at it the container should just spit out random shit like different resolution, audio api, user agent, once in a while (unless the user turns it off) to thwart such attempts.
Unfortunately, when the creator and maintainer of 67% of all browsers is an ad company that is exploiting this in the first place, there is no chance that this could happen.
My guess is the difference between "regular CSS that adapts to screen size" and "responsive CSS" is that the former only has a single set of rules while the latter has different CSS rules that get enabled/disabled based on screen size.
Conditional rules -> different content gets loaded -> server gets notified of what rules are enabled -> fingerprinting
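A toy illustration of that chain (the `tracker.example` endpoint is hypothetical): each media query pulls a distinct URL, so the server learns the viewport bucket from its request logs without any JavaScript at all.

```css
/* Only the matching rule's background image is fetched, so the
   request itself leaks which viewport bucket the user is in. */
@media (max-width: 600px) {
  body { background-image: url("https://tracker.example/t?w=small"); }
}
@media (min-width: 601px) and (max-width: 1200px) {
  body { background-image: url("https://tracker.example/t?w=medium"); }
}
@media (min-width: 1201px) {
  body { background-image: url("https://tracker.example/t?w=large"); }
}
```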
Things went downhill once we started writing HTML which required knowing screen size ;-<
That way, the page displays correctly for you, but the server has no idea your actual fingerprint.
There's some trickiness to get this to work right; the collection of fake fingerprints would have to have a certain amount of persistence, because if it was regenerated every pageload, the server could probably tell that only one fingerprint kept showing up repeatedly. Maybe each fake fingerprint should have a completely realistic-seeming browsing session, happening in parallel with your real one, with half the collection continuing on browsing even after you're done? Except wait, ads could just separately target every fingerprint, and it doesn't matter if 99% of them are fake as long as its accuracy for your real one is still good. To defeat that you need the randomized activity using your real fingerprint.
The ideal would be if this was done through a proxy server, which would then know every fingerprint ever sent to a website. It could then provide you with a random collection of past fingerprints that have actually visited the same website, so every visitor gets a collection of fingerprints randomly drawn from the same "bag", rendering visitors indistinguishable.
Considering the alternatives, that sounds really appealing for me. I'd also buy it for my less tech-literate parents.
Also, ISP-level adblock will lead to tons of support requests, esp. when news websites start blocking that ISP and tell customers to call the ISP's support to "fix" the internet.
Simple corporation block list (e.g. Facebook, Google) https://github.com/jmdugan/blocklists/tree/master/corporatio...
"Someone Who Cares" list http://someonewhocares.org/hosts/
Ultimate Hosts Blacklist: 1 million blocked domains (once in a while you might need to unblock something) and also a bonus known hacking IP blocklist. https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist
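All of these lists work the same way: hosts-file entries point ad/tracker domains at a null address, so requests to them never leave the machine. A tiny excerpt of the format (domains here are just examples):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
# 0.0.0.0 resolves the domain to nowhere, so requests fail fast
0.0.0.0 doubleclick.net
0.0.0.0 adsafeprotected.com
```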
It’s not harmful, as long as you’re not one of the people who gets tricked. But it does indicate that they want to do you harm, and try to. That they failed doesn’t make it all better.
that sure would keep me up at night.
obviously, i know google does more, but it seems like a large chunk of their revenue must be dependent on shady technical tricks like these working.
Other options would be that if you are a content distribution company, e.g. youtube, google, facebook, twitter, instagram, etc. then you cannot have any control of the client side applications that consume the content. Trustbusting would come into play here.
Or legal obligations to follow a user's desire not to be tracked with real criminal fines and jail time applied to executives, managers, and developers who failed to follow the law.
It's easier to just use Firefox with uBlock Origin, Cookie AutoDelete, etc etc.
If you want to buy advertising online you're probably gonna end up dealing with them either directly or indirectly.
Audio feature detection isn't even a novel technique.
I've seen trackers look at download stream patterns to detect whether or not BBR congestion control is used, I have seen mouse latency measured from the difference between mouse-ups and mouse-downs in double clicks, and I have seen speed-of-interaction checks in mouse movements.
Just checking for the constructor of something an ad might legitimately use (like audio) is relatively benign, to be honest. It is naive to expect ads not to do this, and it is why I use an ad blocker even on sites without annoying ads.
See also the recent decision to allow animated banner ads on various Stack Exchange network sites.
The fact that even the people running a big site like Stack Overflow don't instantly know where it comes from is only further proof that using an ad blocker is a reasonable decision.
Maybe it is naive, but all an ad should be, in my eyes, is a picture and something that counts the page views. And when you are a site that has ads as its main income, you should have at minimum one employee who knows and tests each ad before it gets accepted and put onto your server.
Only then will your customers trust the ads you use, and only then can any reasonable person even consider deactivating the ad blocker for your site.
I am pretty sure somebody explored this idea before me, why doesn’t it work?
It would be interesting to see where we are in ten years.
There is some discussion of the technical cat-and-mouse game he has to play as advertisers try to make their content avoid detection and blend in with the regular programming. In this version of the future, the ad blockers eventually win and network television is destroyed. (The book also features networked computers and email ("telefax"), but the concept of ads appearing on them was still too futuristic for 1985.)
Adnix and Preachnix were the essence of capitalist entrepreneurship, he argued repeatedly. The point of capitalism was supposed to be providing people with alternatives.
"Well, the _absence_ of advertising is an alternative, I told them. There are huge advertising budgets only when there's no difference between the products. If the products really were different, people would buy the one that's better. Advertising teaches people not to trust their judgment. Advertising teaches people to be stupid. A strong country needs smart people. So Adnix is patriotic. The manufacturers can use some of their advertising budgets to improve their products. The consumer will benefit. Magazines and newspapers and direct mail business will boom, and that'll ease the pain in the ad agencies. I don't see what the problem is."
Adnix, much more than the innumerable libel suits against the original commercial networks, led directly to their demise. For a while there was a small army of unemployed advertising executives...
It's clearly always possible to detect whether an ad was seen. Often, the content owners do not bother putting such measures in place, but the advertisers definitely do. (It's even easier from their perspective to check if the ad has been served or not, as many ad blockers prevent the ad from even downloading by sending all requests to that domain into a black hole and so the ad is never even requested from the server.)
I agree that the lobbyists will win. I wish we had politicians with some moral character though.
I love utopian visions of the future.
Personal opinion: Laws are needed to make what advertisers are doing illegal. Advertisers are spying on people to the extent where if the government did it they'd need a warrant.
Unfortunately the water hole has been poisoned, so now I have to block it all.
But when ads block content; include flashing animations, audio, and video; and take up more layout space on a site than the actual content; then people have had enough.
Meaning, if advertisers hadn’t built more and more intrusive ads and had stuck with static ads that don’t severely harm the UX, then I doubt most users would bother with ad blockers.
The advertiser arms race has resulted in a classic tragedy of the commons. That's my diagnosis of the problem. Traditionally regulation is needed to fix that. Exactly what that entails is beyond me.
- Chrome is the fast one.
- IE is the one you have to use for some government/old websites.
- Ad Blockers are for safety (akin to anti-virus).
This was from a group that didn’t even know how to install Chrome on the new computers they got this year.
Group knowledge on this topic is largely going to be driven by what they’ve heard in the news or perpetuated by their social circles. And scary things will stick for long after they stop being true.
I think most can agree here that this level of spying on users is bad. It's sorta like child labor but a lot less obviously bad: it is obviously bad, nobody likes it, but enough people take advantage of it not being illegal that it's just a socially tolerated thing. Once made illegal, it will be looked back on like "how the hell did we think that was okay? How the hell did we willingly let it occur?"
Kind of like how https://old.reddit.com/r/gaming/ is just a sequence of ads being flawlessly delivered to an ad-averse demographic that eats the ads up.
Savvy users will continue to block on machines that aren’t walled gardens and through pi-hole style blocking.
I think the cat and mouse aspect will be completely overshadowed by tech giants continually neutering their users' ability to block ads.
Safari on iOS allows for content blocking, and Firefox for Android allows users to install extensions.
And, I was referring to the future and trends rather than the current situation. System wide ad blocking used to be possible on iOS without jailbreaking, now it’s not.
I expect in time google will go similar and change android APIs, or play store rules, to do similar.
It also works inside apps adopting Safari View Controller.
At the very least, though, eventually advertising agencies will hopefully figure out that this sort of tracking is pointless; "newspaper-style" ads are more likely to actually engage with the people encountering those ads (since said ads would be selected based on the page content rather than the person reading that content). This is how DuckDuckGo's ads work; the sponsored results are selected entirely by the actual search query. If content-driven ads (plus affiliate links, but I somehow doubt that's enough of DDG's traffic to be a deciding factor here) is enough to pay for enough computational power (and the development team to run it) to serve up 30+ million queries a day, then there's no reason it can't be enough for any other site.
Security-wise, I think the best we can hope for is more and more OS-like sandboxing and isolation, capability-based security, and other defense-in-depth measures.
Privacy-wise, for defeating tracking and the like, ideally I'd hope for technical countermeasures to win the battle, but if we do end up having to rely on legal measures, they have my full support, GDPR and CCPA included.
(Random idea for a technical countermeasure against fingerprinting: have you heard of those projects trying to defeat behavioral tracking where, whenever you visit a page, it simultaneously opens a bunch of other random pages in the background, hidden from you, and simulates activity on them, the idea being that Facebook has no idea what actual websites you like to visit because it's lost in the noise? What if instead, whenever you visit a page, your browser or a plugin or a proxy or whatever opened the same page simultaneously in a bunch of hidden background windows, with a random configuration of audio enabled/disabled, user agent, screen resolution etc fingerprinted characteristics?)
Indeed it is. It is not, however, dependent on running arbitrary Turing-complete code in my browser automatically and without my permission. Write-once-run-anywhere is perfectly possible and feasible under the traditional "download and install this program and run it" model.
I'm optimistic about WebAssembly (on that note) because of its usefulness beyond the browser; like I described in a different comment, it's only a matter of time before we start seeing GUI-enabled WASM runtimes that allow WASM-modules-as-programs to work as desktop or mobile apps indistinguishable from their native (or kinda-native, in the case of Android) counterparts.
You can't build apps without Turing-complete code. We would be back to downloading and executing applications/programs.
Sure you can. None of these things should require me to run your arbitrary Turing-complete code in my browser:
* Reading an article
* Writing an article
* Shopping online
* Searching for things online
* Reading social media posts/comments
* Submitting social media posts/comments
* Browsing a code repo
* Submitting issues / PRs / etc. to a code repo
* Reading documentation
That (non-exhaustive) category accounts for a solid 80% of everything I do online (and the other 20% are things which I'd rather be doing through native apps). All of these things should be possible (and indeed are possible) entirely with HTML (and optionally CSS) + a server somewhere handling the backend logic. If they're not, then your "app" is over-engineered, or it is indeed better off as something I explicitly download and install, which brings me to...
> We would be back to downloading and executing applications/programs.
Good. That's the direction the mobile world has already been going for a decade now. Native apps actually integrate with the platform. Web pages don't (or at least don't do so well). At least in that situation I'm explicitly "downloading and executing those applications/programs" by my own choice.
We even have things like WebAssembly now, with experiments and effort toward making it usable as a general-purpose compilation target/runtime outside a web browser. No reason why it'd take more than a decade for someone to figure out how to wire a WebAssembly module into some sort of Qt-based (or whatever) runtime + UI and get the best of both worlds.
I genuinely don't understand this argument at all -- either you understand something about native platforms that I don't, or you're working under the assumption that all of your native apps:
a) aren't already vacuuming your data at the same rate as web apps.
b) wouldn't get considerably worse if they replaced the web ecosystem.
On the first point, native sandboxing is almost universally terrible. There's some promising stuff happening (notably with MacOS and with Flatpak/Wayland) but it's all just playing catch-up to where the web was years ago.
Pick just about any company that maintains both a website and a native version of the same app -- almost universally, the web version is safer to use. Nobody should be installing Facebook, Twitter, or Reddit on their phone. In fact, I would say the single best piece of advice I can give to anyone to improve their privacy/security on their phone is to stop installing things.
On the desktop, the situation is better, mainly because the desktop is very slowly turning into a niche platform and the web is a much more attractive place to put skuzzy, privacy-violating software. But this is a bit like the old argument that MacOS was more secure than Windows because no one was targeting Mac with viruses at the time. Get rid of the web and all of those skuzzy developers you hate aren't going to go away, they're just going to start making native apps. Where, again, the current sandboxing for most users and OSes is completely inadequate.
The unfortunate, horrible problem is that running code we don't trust is gonna be necessary, no matter what world we move to. Sandboxing and permission systems are something we are going to have to figure out. Web or not, there is never going to be a world where you'll be able to trust all of the code you run on your computer. And currently, despite the many problems that browsers have, they're still the best consumer-accessible solution for sandboxing code.
Of course integration and app performance suffers on the web. But frankly, neither of those are more important than sandboxing.
> almost universally, the web version is safer to use. Nobody should be installing Facebook, Twitter, or Reddit on their phone.
Of course, this isn't really true, precisely because so many websites that could function fine without JS (including things like news sites that should just be static content!) instead choose not to.
Which of course is the real problem with yellowapple's idea. Lots of services cripple their mobile website and push you to install their app instead; if we removed JS from the Web, everyone who could would just start doing the same on desktop too, right? Upstarts trying to maximize growth probably will work great on the Web, but as they get more established they'll start pushing people more and more towards their native apps, and existing established players will do that from Day 1 (of the new, JS-less world), including everyone mentioned so far—Facebook, Twitter, Reddit, GitHub, major news sites, because people will deal with the one-time friction of installing the app in order to access the network or content.
And in the proposed 10 years being discussed here, there's no reason to believe locally-installed applications won't have exceeded browser sandboxing capabilities, let alone caught up.
Meanwhile, the web sandbox is actively deteriorating specifically because frontend developers want to do the things locally-installed applications can do.
> Nobody should be installing Facebook, Twitter, or Reddit on their phone.
Not at the current state of native app deployment, no, but that's improving rapidly and substantially, especially in the mobile space. Also: the vast majority of users are doing that anyway, so it's worth investing the time and energy into being able to sandbox apps without needing an entire HTML + CSS + JS engine/stack to do it (and indeed, both Google and Apple have made significant strides on that front in the last 10 years, though there's certainly still room for improvement).
> The unfortunate, horrible problem, is that running code we don't trust is gonna be necessary, no matter what world we move to.
Yes, but at least with a locally-installed app, I'm explicitly opting into that app existing and running on my device. This on its own will at least somewhat cut down on the amount of untrustworthy code running on my system.
> Of course integration and app performance suffers on the web. But frankly, neither of those are more important than sandboxing.
No, but sandboxing - again - is a problem that can (and almost certainly will) be solved within the next decade, at which point integration and performance benefits will make local app installation even more attractive than it already is.
What? Sure there is!
1. The reason to believe native apps' sandboxing won't exceed the browser is that any sandboxing that works on native apps would also work on the browser app itself.
2. There's also 2 reasons to believe native apps' sandboxing may always be inferior to the browser:
(a) The Web has wider reach, and people are already more confident/careless visiting strange websites than downloading and running strange apps, so exploits targeting the Web are more valuable and therefore more resources are spent battle-testing it.
(b) Native apps currently have deeper access to the device which makes it easier for them to do bad things, and (similar to reason 1) will never have less access to the device than the browser app which is also an app.
(I'm aware there's arguably a slight exception here about Mobile Safari and W^X, but I don't think that disproves the overarching reasoning.)
They've already won.
The issue is cross-domain tracking, like we see with ad networks that profile you over many different sites.
On the other hand I think that it is the one that blends the best and can even have some value. For example I can imagine that people enjoy a car heist movie more if actual cars that exist are being used as opposed to some made up stuff.
In France there are laws against "accidental" advertising so in news and TV almost any brand will be taped over or blurred. It is actually way more jarring and ugly than just leaving it as it is. It is especially funny when you have the logo of national rail company, which is basically a gradient, blurred.
In fact, I almost spat out my Coca Cola, I was so surprised that you would think that.
There is no reason these ads should be anything other than a linked image.
e.g. Browsing to an arstechnica.com article, with speakers on but nothing else playing.
Ad URL: https://static.adsafeprotected.com/sca.17.4.95.js
JS Domain: adsafeprotected.com
Domain Owner: Integral Ad Science, Inc
Google's recent stance on the matter of fingerprinting:
>Chrome also announced that it will more aggressively restrict fingerprinting across the web. When a user opts out of third-party tracking, that choice is not an invitation for companies to work around this preference using methods like fingerprinting, which is an opaque tracking technique. Google doesn’t use fingerprinting for ads personalization because it doesn't allow reasonable user control and transparency. Nor do we let others bring fingerprinting data into our advertising products.
The important part being: _Nor do we let others bring fingerprinting data into our advertising products._
The same company advertises their fingerprinting capabilities:
>Browser and Device Analysis: We analyze the technological fingerprints of browsers and devices in order to uncover bots fraudulently posing as human users. We can validate what type of mobile or desktop device a browser is running on, providing additional context with which to identify fraud.
And it is this fingerprinting that gets them selected as a Google Brand Safety and Viewability Preferred Measurement Partner
>New York, NY – Integral Ad Science (IAS) has been selected as a preferred partner in Google’s Measurement Program for both brand safety and viewability. Partners were selected after meeting rigorous standards for accuracy and using reliable methodologies to measure KPIs that matter for marketers. The program is designed to make it easier for advertisers to source trusted, third-party measurement providers.
The gist of it being that Google has heavy cognitive dissonance, with their advertising wing rewarding partners that fingerprint users (against their own policies), and the Chrome team barely managing to introduce some anti-fingerprint measures, which are clearly not enough.
Perhaps, but I think some of that behavior only appears dissonant. Like the NSA, Google often uses carefully constructed language that is designed to sound like a statement about a topic of concern without saying anything actually useful. For example:
> Google doesn’t use fingerprinting for ads personalization
The only reason to add "...for ads personalization" is if they are using fingerprinting for other purposes. This could include other ad-related purposes like attribution.
Google's claims about not using specific data for a specific purpose are probably true. They simply fingerprint (and probably correlate) everything else.
If you don’t use an ad blocker you should consider your computer compromised.