Ad tech firms test ways to connect Google’s FLoC to other data (digiday.com)
327 points by EdwinHoksberg 5 days ago | 219 comments

Ongoing related thread:

Privacy Analysis of FLoC - https://news.ycombinator.com/item?id=27463794 - June 2021 (2 comments)

I might be missing something, but wasn't this part of the plan? Of course the goal was to offload the actual interpretation of the FLoC ID to third parties, since FLoC IDs themselves carry no semantic meaning.

Not saying this is OK, but this was the plan from the beginning: being able to push responsibility away from Google. This article[1] actually shows what it's like for an advertiser.

[1]: https://cafemedia.com/early-status-of-the-floc-origin-trials...

EDIT: I hadn't finished the article before posting; it actually discusses CafeMedia's work.

Part of the plan but not part of Google’s consumer-oriented hype for it. Google has intentionally declined to state the fingerprint-joining and dollar impact to third parties because it would detract from the (already specious) message that FLoC improves privacy. Perhaps during some anti-trust hearings in the future we’ll see internal emails that detail how much $$ Google estimated that FLoC would take away from other advertisers as a function of what level of fingerprint joining they succeed at.

FLoC is NOT proven to be incentive-compatible with consumers and how they value their privacy. The only guarantee is that users are (on average) harder to distinguish within a cohort. Google absolutely studied the possible economic consequences of FLoC prior to announcement and they are hiding that study. Either because the results are crappy, the Google Ads employees are less competent than they were a decade ago, or both.

> FLoC is NOT proven to be incentive-compatible with consumers and how they value their privacy. The only guarantee is that users are (on average) harder to distinguish within a cohort. Google absolutely studied the possible economic consequences of FLoC prior to announcement and they are hiding that study. Either because the results are crappy, the Google Ads employees are less competent than they were a decade ago, or both.

Could you walk through your view of the incentives here? I'm curious because, while I like the idea of FLoC in general, Google is the last company I trust with this. Moreover, there are a lot of details (such as cohort sizes) that could have the potential to mask identity and align incentives, but that have been underspecified by Google so far.

Normally, incentive compatibility is a concept from game theory and, more specifically, mechanism design (best known from auction theory).

In simple terms, it states that the mechanism that (say) Google offers to the consumer is such that the consumer finds it optimal to act according to their true preferences. In so-called direct mechanisms, this means that the consumer prefers to reveal their true type (information about themselves) rather than trying to fake being someone else. If this holds, it is much simpler to build mechanisms that are robust and achieve good outcomes.

For example, hotel booking websites are typically not incentive compatible if they offer different prices to different users. A user who is known to accept high prices will be offered higher prices in the future. If that tracking can be fooled, the price mechanism is not incentive compatible: the consumer would prefer to pretend to have a lower willingness to pay, and would thus be offered the same hotel for a lower price.
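The hotel example can be put in numbers. A toy sketch (all prices and valuations invented for illustration) of a mechanism that is not incentive compatible:

```python
# Toy sketch of the hotel example above (all numbers invented): the
# seller's price depends only on the buyer's *reported* type, so a
# high-value buyer does strictly better by misreporting -- i.e. the
# mechanism is not incentive compatible.

def quoted_price(reported_type: str) -> int:
    """Seller's mechanism: the price is a function of the reported type."""
    return {"high": 200, "low": 120}[reported_type]

def buyer_surplus(true_value: int, reported_type: str) -> int:
    """Buyer's payoff: value of the room minus the price actually paid."""
    return true_value - quoted_price(reported_type)

# A buyer who truly values the room at 250:
truthful = buyer_surplus(250, "high")   # 250 - 200 = 50
misreport = buyer_surplus(250, "low")   # 250 - 120 = 130

# Lying strictly dominates truth-telling here.
assert misreport > truthful
```

An incentive-compatible mechanism would have to make truth-telling at least as good as any misreport, which is exactly what this pricing rule fails to do.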

On the other hand, bundled goods (like car accessory packages) may be incentive compatible: even though each package is somewhat imperfect for each customer, customers still prefer "their" package over another one that is less costly. That way, the car company can maximize profits.

In mechanism design (for example, in the design of auctions), incentive compatibility is usually a requirement for deriving mechanisms that maximize the objective function. This is because there are, of course, infinitely many complex mechanisms one could propose; but, under some conditions, incentive compatibility ensures that the "optimal" mechanism will have a simple structure.

Finally, there are substantive reasons: If a customer has to act against their true preferences, the system is likely unstable and gameable. And also, people might walk away.

Some people also consider the participation constraint a part of incentive compatibility. That is, the consumer must gain at least as much as they would by not participating at all.

In the present case, the participation constraint is violated. The consumer gains nothing by being targeted and, if asked, would rather opt out. In that case, Google can design whatever they want, an informed customer will reject it.

Google and Ad firms know this, which is why it is virtually impossible to truly opt out. The participation constraint is knocked away by deceptive practices and dark patterns.

Right! This is the notion of incentive compatibility I'm trying to draw upon. One problem here is that not all of the incentives and objective functions are visible. That said, I'm certain Google approved FLoC for launch based on some mechanism, and so far I see no evidence that that mechanism can actually be favorable to the wider public (not just "privacy geeks"). I would love to see the actual research that was provided to Google's launch committee, but I might as well ask for access to Area 51.

For the majority of users, I do believe tracking has some upshot, at least for reducing ad spam and ad fraud. It's really hard to prove this without a counterfactual study, but if you compare Facebook ads with the crap on news sites today, I think that situation is illustrative of a place where Facebook's improved targeting saves the user from a lot of spam / fraud. And I'd argue that outcome is a result of Facebook having a monopoly over their own targeting and identifiers.

As a business owner who wants to post online ads, I want the providers to be as competitive as possible on price and quality. As a user, if I'm going to have ads targeted at me, I hope they don't blink, slow down my computer, or amount to the utter trash you see on today's poor news sites. As a user, I want competition to force ad targeters to filter out ad spam -- it hurts the properties where they place things.

Any new ad provider (one that does not have 20 years of user data like Google, or 15 years like Facebook) will not be able to fingerprint as well with FLoC as it could with cookies. Existing competitors to FANG like Verizon, AT&T, and Comcast have lots of historical data to offer, and even some FLoC-like products, but if Google's FLoC replaces cookies then it's harder to derive value from, say, Verizon's FLoC -- simply because a cookie is a more effective fingerprint than a Google FLoC.

So FLoC is anti-competitive and further entrenches Google's monopoly. Does that really impact consumer privacy? Well, if the ad market is less competitive, the level of privacy that the market can support will be less consistent. There will be players like Apple, who make so much money off hardware that for ads they're fine with the unobtrusive choice of selling Google default search engine rights (versus, say, the loads of crapware that can be found in some Android distributions). But there will be lots of smaller ad players who will get more desperate, turning to ad spam and shadier ways to target you with what little data they can get. For smaller properties, perhaps Google's FLoC is so useless for targeting and lift that everybody switches to requiring a sign-up with SMS phone-number confirmation.

_Can_ FLoC be incentive compatible? What are the actual likely economic consequences? Google knows this (or they think they know), hence FLoC got launch approval. But Google won't share those details with you. They want you to think FLoC is just great for privacy. This is how Sundar addresses his position that "we need to work hard on user trust."

Yes, that was part of the plan.

This is exactly how FLoC is supposed to work, and this is exactly what privacy watchdogs have criticized about it.

The proper headline should be "Ad tech firms test FLoC".

FLoC as a source of fingerprinting bits has been an issue in W3C discussions of the project.


As a fingerprinting surface FLoC has similar properties to the Battery Status API -- not stable for the same user over long intervals, but can be used to help match pageviews from different domains that were close in time.


Imagine sites like Facebook, Reddit, and Google themselves, which have more monthly users than the FLoC ID space has values. Imagine they store each result for a given user, and assume that, with high probability, when that value changes it is fairly adjacent to the old value. Now you build a graph database of IDs and their relations. Finally, you link it all to users' profile metadata. You can build statistical distributions around each ID node based on users' gender, race, family status, interests, etc., and use those probabilities to guess at the precise interests of each new, unknown visitor. Also, those sites have a lot of outbound links, so now you can figure out that a particular ID has a high correlation with a particular domain, too.
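A minimal sketch of that linking scheme, with fabricated data (the users, cohort IDs, and profile labels are all made up for illustration):

```python
# Minimal sketch of the scheme above, with fabricated data: log each
# user's cohort ID over time, link consecutive IDs of the same user
# into a graph, and aggregate profile metadata per ID so that a lone
# cohort ID from an unknown visitor maps to a probable interest.
from collections import defaultdict

observations = [  # (user, week, cohort_id) -- all made up
    ("alice", 1, 1001), ("alice", 2, 1002),
    ("bob",   1, 1001), ("bob",   2, 1001),
    ("carol", 1, 1002), ("carol", 2, 1003),
    ("dave",  1, 1002),
]
profiles = {"alice": "cars", "bob": "cars", "carol": "sports", "dave": "cars"}

# Graph: edge between consecutive cohort IDs of the same user.
edges = defaultdict(set)
last_seen = {}
for user, week, cid in sorted(observations, key=lambda o: (o[0], o[1])):
    if user in last_seen and last_seen[user] != cid:
        edges[last_seen[user]].add(cid)
        edges[cid].add(last_seen[user])
    last_seen[user] = cid

# Interest histogram per cohort ID, from known users' profile metadata.
interests = defaultdict(lambda: defaultdict(int))
for user, week, cid in observations:
    interests[cid][profiles[user]] += 1

# An unknown visitor shows up carrying cohort ID 1002: best guess.
guess = max(interests[1002], key=interests[1002].get)  # "cars"
```

A real first party would do this at vastly larger scale and with far richer metadata, but the structure (ID adjacency graph plus per-ID demographic histograms) is the whole trick.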

But only the big sites like google have enough users to birthday-paradox their way into a meaningful ID graph, so you're safe from that tiny ad startup that also happens to be threatening google's business model...
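A back-of-the-envelope calculation shows why scale matters here, assuming roughly a 16-bit cohort-ID space (the origin trial reportedly used on the order of 33k cohorts, so this is only an order-of-magnitude assumption):

```python
# Back-of-the-envelope version of the point above, assuming a 16-bit
# cohort-ID space (~65k IDs). A huge site sees every ID thousands of
# times over; a small site barely sees each ID once, so it cannot
# build a useful ID graph.
import math

ID_SPACE = 2 ** 16  # assumed cohort-ID space size

def expected_users_per_id(monthly_users: int) -> float:
    return monthly_users / ID_SPACE

big_site = expected_users_per_id(2_000_000_000)   # ~30k users per ID
small_site = expected_users_per_id(50_000)        # under 1 user per ID

# Classic birthday bound: probability that a site with n users sees
# at least two users share a cohort ID is ~ 1 - exp(-n^2 / (2 * space)).
n = 500
p_any_collision = 1 - math.exp(-n * n / (2 * ID_SPACE))  # already ~85%
```

Collisions alone are cheap to find, but building dense per-ID statistics (thousands of known users behind every ID) is what only the giants get for free.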

Google, Reddit, and Facebook also have sufficient first-party traffic to just gather data, target, and sell ads the "old-fashioned" way. The independent and small sell-side is also hit hard by this.

I didn't realize FLoC was already in use.

Looks like to disable it in Chrome... you have to find a deeply nested config and then tell Chrome you want to "disable privacy-preserving features". !?!?

I can do this. But of course nobody else will -- it's telling you that you are disabling privacy-preserving features! It's making me kind of livid.

I'd hate to be that guy. But Chrome is not your friend when it comes to privacy. Chrome is the adversary.

If you're using uBO, it's defused by default: https://github.com/gorhill/uBlock/releases/tag/1.35.0

I thought that µBlock Origin didn't work on Chrome?

If you want privacy, stay on Firefox.

My understanding is that it's gimped on Chrome. It's still better than nothing.

Firefox unfortunately is becoming harder to use with every release.

by disabling it you're giving more uniquely identifying info to the fingerprinting backend apps lol

"the only wining move is not to play"

Really, if you are at the point where you hunt for hidden settings to disable in your browser, why not just use Firefox?

You can either obfuscate with tools like CanvasBlocker and Brave, both of which expose slightly randomized prints, or 'poison the well' with AdNauseam.

what uniquely identifying info does the feature disable, in exchange for FLoC?

But this can't be, people assured me that FLoC was an improvement to privacy, and that it would provide an alternative to persistent identifiers and profiles, and that it would help end the arms race on fingerprinting.

You mean to tell me that FLoC will be used for fingerprinting anyway, that it changes nothing about advertisers' strategies and tracking techniques, that they won't self-regulate, and that it doesn't work to throw them bones of extra data and hope they'll willingly stop their abusive behavior if we meet them halfway?

This is a shocking development.

The only consolation is that Google's next privacy compromise with the ad industry definitely won't suffer from exactly the same problems. The best thing for us to do now is to assume that this is a completely random, one-time fluke that doesn't reflect anything on the industry's character. No need to change the way we engage with the advertising industry on privacy issues because of it. We should keep offering them compromises that make it easier for them to track users, and keep assuming that they'll in good faith regulate themselves.

> End the arms race on fingerprinting

Google is known to fingerprint you on their sites[0], and this practice will continue unless some sort of political action is taken to make fingerprinting illegal. WebGL is not the only heuristic used to reliably determine that a specific device is accessing a site; a whole slew of techniques can be used to determine that it is 'you' on a site (you can even detect whether a browser is running in a virtual machine, among many other fingerprinting techniques).

To mitigate this, I do most of my browsing with JS disabled by default, and if I really need JS turned on (for a site I trust, like my bank), then I temporarily turn it on for that specific site. You can also disable WebGL in Firefox in about:config, but keep in mind there are many other techniques Google and `ADTech` in general can use to fingerprint you.

[0] https://jonatron.github.io/webgl-fingerprinting/

This is a complete sidenote, but I don't understand why Firefox's Canvas controls don't affect WebGL.

Firefox has per-site settings for whether the canvas should be accessible which are very useful, but they don't have per-site settings for WebGL, it's either on or off for the entire profile. Which kind of defeats the point of Canvas blocking since (at least last time I checked) WebGL fingerprinting is possible regardless of whether Canvas can be read from.

I'm sure there's some technical reason, but it really seems like turning Canvas reads off for a site should also turn off WebGL.

Here is a Firefox bug report suggesting a per-site permission for WebGL: https://bugzilla.mozilla.org/show_bug.cgi?id=1694456

There's been no Mozilla comments on the bug yet. Perhaps so many sites use WebGL that the user would be pestered with too many permission prompts?

Google turns every result on their search results page into a tracking URL if you have JS disabled. The solution is to allow JavaScript and install the Google Search Link Fix add-on -- until they break that too, I guess. Or use another search engine.

Fingerprinting on your own sites is pointless since only third party cookies are being removed. First party cookies still work like normal.

It gives you the ground truth with which you then correlate fingerprints from non-first-party domains.

The whole point of it is to correlate it to other data using the same fingerprinting method. Google is often a first party and a third party, so they need a way to line all that data up.

Won't turning js off make you one of the few people in the world that do this and hence easier to fingerprint?

I have seen this type of reasoning before in HN comments but from a user's perspective it does not make sense. Imagine every user is sending a maximum amount of information, which we can see keeps increasing over time, via HTTP headers (including cookies), browser capabilities, hardware capabilities, etc. This "run with the herd" reasoning seems to suggest the best way to avoid fingerprinting is to send the maximum amount of information, "like everyone else". That only results in ever more information being sent to the online advertising industry. The probability they can distinguish one user fingerprint from another goes up as the amount of information sent increases. The objective of the online advertising services company is to gather as much information as possible from users.

The objective for the user should be to send as little information as possible. If a fingerprint shows the user is not running JS and is providing only a very minimal, generic set of information, how much value is there in trying to serve ads to that user? Users who want better privacy should be trying to reduce the amount of information they send. Maybe the first movers in that effort are "fingerprinted" as being privacy-conscious, tech-savvy, etc. That is probably going to result in fewer ads served to them, not more. Eventually, when most users, "the herd", are sending the minimal amount of information, the fingerprints all look similar.
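The herd argument can be made concrete with self-information: a signal value shared by a fraction p of users contributes -log2(p) identifying bits. A sketch with invented population fractions:

```python
# Illustration (population fractions invented) of the herd argument
# above: each extra signal a browser exposes adds identifying bits,
# while a single coarse "JS off" signal contributes a fixed, small
# amount that shrinks as more users join that herd.
import math

def bits(fraction_sharing: float) -> float:
    """Identifying bits of a signal value shared by this fraction of users."""
    return -math.log2(fraction_sharing)

# Maximal-information browser: bits from independent signals add up.
rich = bits(0.05) + bits(0.01) + bits(0.02)  # e.g. UA + fonts + canvas

# Minimal profile: one coarse signal. If 1% of users browse with JS
# off, that is ~6.6 bits today -- and it *falls* as the herd grows
# (10% sharing it is only ~3.3 bits).
minimal_today = bits(0.01)
minimal_big_herd = bits(0.10)
```

So the "you stand out by opting out" objection is only true while the opt-out herd is small; the rich fingerprint keeps accumulating bits with every extra surface exposed.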

Think it through. Advertisers do not care about users who will not indiscriminately run JS. They go for the low-hanging fruit.

I'd posit that the biggest risk for advertisers is "plausible bullshit". Their ability to say "look at our huge tracking profiles" is dependent on both quantity and quality of data. If ad networks can't accurately sanitize their data, advertisers are going to balk at spending $6 per click for misprofiled audiences, when they can spray-and-pray "good enough" contextual ads for 30 cents a click.

Give me a VPN that regularly geolocates me at a Starbucks 30km out of town. Give me plugins that stuff my search history with a fixation on the Cincinnati Bengals and replacement parts for a 2013 Hyundai Accent. Yeah, they might see my actual traffic patterns, but the goal is to make it expensive and hard to filter the real use from the elaborate story.

You're just added to a (very large) pool of people who browse with JS turned off. Turning JS off as a default is a common thing.

Amongst the top 1% of tech savvy users, maybe. In all my years of supporting 100,000s of “regular” users I’ve never encountered anyone with JS disabled.

Even the fact that WebGL is disabled contributes some entropy that can be used to identify you.

You can't detect whether WebGL is turned on or off if JS is turned off. You need JS turned on to detect WebGL's presence.

> Google is known to fingerprint you on their sites[0]

To my understanding, Google (and many other sites) use WebGL and other fingerprinting techniques to distinguish real users from bots.

This does not mean they use it to track individual users (if that were even legal in Europe under GDPR).

Google's tracking consent flow is already in breach of the GDPR, so it's not a problem for them.

Why do you want to disable it? Isn't it nice to see ads that are interesting and useful to you rather than looking at some useless ads you're not interested in?

The issue is buried in the premise of the question.

“If you’re going to see an ad regardless, would you rather it be relevant to you or not?”

The answer is and always has been “I don’t want to see an ad in the first place, I don’t want you collecting any information about me under any circumstances, and anything that makes ads spots worth less is a positive.”

You know the cheeky “you won’t see fewer ads, they’ll just be less relevant” line? The only long-term solution to actually see fewer ads is to drive their value down to nothing.

The fact that you can pay to remove ads on a lot of services kinda gives the game away that they’re a net negative. Why would you ever want them gone if they’re so useful and helpful?

AdTech is such a garbage industry -- drunk on its own delusions of facilitating commerce and helping businesses reach customers, while being so annoying and vile that the only way for its products to function is to insert themselves into every facet of life, because nobody would ever seek them out on their own. Bleh.

>The only long term solution to actually see fewer ads is to drive their value down to nothing

You can only hope that whatever marketers would do instead to promote their stuff is less annoying or damaging. I have my doubts.

There's two sides to the equation though (I work on Supply Side ad tech).

There's a huge amount of content that people are just not willing to pay for, but will gladly view ads for. You may be willing to pay for Youtube Premium and Twitch Plus or whatever, but the vast majority of people do not.

Do I feel like I'm a horrible person because I help make websites more money so they can stay in business? Heck no. It's the only reason 90% of these sites are able to exist in the first place.

So what happens when the tracking / fingerprinting / data mining that your industry keeps doubling down on provides enough data for

-- your next health insurance change to triple your premiums because of that previous bill you had such a hard time paying down

-- your mortgage / refi / loan application to be denied for factors completely separate from your actual credit report, but have made it into a reputation system

-- you to be quietly, passively passed over on that job application because of the reputation hit from that one really poorly thought-through social media post last century.

Data mining and data stores that affect people's lives and opportunities, that aren't just obfuscated, but actively secret are a blight.

I feel like gladly is overstating it a bit but I take your point. Users will put up with ads with a lot less resistance than a paywall. But that logic still doesn't really follow unless you think that businesses making money by literally any means they can get away with is an end unto itself.

If you have a service like YouTube that provides so much value and is such an economic multiplier that we can't possibly imagine society existing without it, then why don't we just pay for it? The fact that we have no system to fund public goods other than ads and taxes is a huge failing. You're basically just describing a tax system that is paid in consumerism, which sucks because it's inherently regressive.

If you have a product which is genuinely useful to hundreds of millions of people but that the value only materializes when it's available for 'free' to everyone then we should have ways of getting you funded that isn't attention or convincing individuals to pay you a subscription fee.

You're hitting on an important economic function that ads currently perform, but then twisting it around and saying that there's no possibility for anything but ads to perform that function.

>You're basically just describing a tax system that is paid in consumerism which sucks because it's inherently regressive.

Why is it regressive? High earners pay more for ad-funded sites than low earners. That's not regressive. It's not progressive either, strictly speaking, but it's better than subscription fees which are regressive because everyone pays the same absolute price regardless of their disposable income.

> High earners pay more for ad-funded sites than low earners.

I'm not certain this is true. And targeted advertising certainly makes it less true, not more. Advertisers are not excluding low-income users from advertising, instead they're targeting more products at them that those users are more likely to buy.

Untargeted ads for higher-cost luxury products might make the Internet cheaper for low-income users, but that's not what is happening in the ad industry right now. Ads exist to get you to spend money, and they are just as targeted at poor people as they are at rich people.

I wouldn't be that surprised if the effect is the opposite, since poor people have less access to research resources, comparison shops, and trials that would allow them to combat the psychological effects of advertising. They're also generally under a lot more stress and time-pressure when they shop than rich people are, which is likely to make them even more vulnerable to manipulation during a difficult purchase decision.

>I'm not certain this is true

I'm pretty sure it is, because nothing changes the fact that you can ultimately only spend what you earn (even considering credit). A share of that spending goes to advertising.

If someone can spend 10 or 20 times as much as another person after basic food and shelter then that difference trumps all other factors by a very wide margin.

I don't think it matters much, but just for the record: I don't believe rich people comparison shop as much as lower income earners. They buy what they fancy and they throw away what they don't like. I also don't think rich people are much harder to manipulate (but I'm not sure about that). They think much less before tapping that buy button. I know that much.

> If someone can spend 10 or 20 times as much as another person after basic food and shelter then that difference trumps all other factors by a very wide margin.

I'm not sure I follow? Google ads don't cost a percentage of the final product price. How much I spend on advertising might be entirely unrelated to the per-item cost of my product -- and how profitable my company is might not have anything to do with how luxury my company is, it might just be down to market penetration and my profit margins. Plenty of companies make enormous amounts of money targeting poor people.

Does an ad for a five-star restaurant on Google cost more than an ad for Taco Bell? And it's not like rich people are being shown a larger quantity of ads on a website than poor people.

My assumption is simply that marketing/advertising spending as a share of revenues isn't hugely greater for products that poor people buy compared to what rich people buy.

If that is true, then per person contributions to ad funded services would be roughly proportional to personal spending. Someone spending 10 times as much as another person would also pay 10 times as much for using Google search or Youtube.

If these services were subscription based then both would pay the same price in absolute terms, which is very regressive in comparison.

But just to make this clear. I don't claim for a second that my extremely crude calculation is anywhere near correct. What I'm saying is merely that subscription funding is very regressive compared to ad funding.

I don't know what the exact extent of that difference is and I cannot break it down to the level of pricing specific Google ads.

> If that is true, then per person contributions to ad funded services would be roughly proportional to personal spending. Someone spending 10 times as much as another person would also pay 10 times as much for using Google search or Youtube.

> If these services were subscription based then both would pay the same price in absolute terms, which is very regressive in comparison.

This is the part where you kind of lose me, unless a rich person is also watching 10x as many videos on Youtube. It seems like you're saying a fixed percentage of a person's spending is going to the websites they visit, but why would that be the case?

You will see the same number of ads on a Youtube video regardless of whether you're rich or poor. And the cost of each of those ads -- the amount of money that gets paid out by the business -- is based on the competition for the ad slot, not the price of the product. I wouldn't take it as a given that products like Taco Bell and Pepsi spend less on online advertising than a luxury watch brand. If anything, I would expect products in crowded consumer markets (ie lower-cost, non-specialized, mass-market goods and services) to have more competitive ad slots that cost more money to target.

So I understand that rich people spend more money, I agree with you on that. But I don't see how you're connecting that fact to the idea of more money from those rich people going to the websites who are displaying ads. I don't see the thread of logic that says that a product costing $500 per unit is contributing more ad money to a website than a product that costs $5 per unit.

>It seems like you're saying a fixed percentage of a person's spending is going to the websites they visit, but why would that be the case?

Very roughly yes. Some percentage of a typical company's revenues is spent on ads, and revenues from each customer are obviously proportional to that customer's spending. It's the same thing (leaving aside sales taxes).

Companies try to maximise the effectiveness of their advertising campaigns. The effectiveness depends on how many people actually go ahead and buy the product relative to how much the ads cost. If running ads on Youtube is less effective, then ad prices on Youtube would have to fall and Youtube would earn less.

Let's say only extremely poor people were using Youtube. None of them would ever buy a high-end smartphone. How much would high-end smartphone makers pay to Youtube for the honor of running ads there? The answer is zero.

Now let's say there are two groups of Youtube users. One group never buys a high-end smartphone. The other group buys one every year. Now it makes sense for smartphone makers to fund Youtube through their ads, but only the group actually buying smartphones pays for it. So the rich group effectively subsidises the poor group's Youtube usage.

I have chosen an extreme and unrealistic example to explain the principle. In reality, there will be a mix of products. The cheapest ones will be bought by almost everybody in roughly the same quantities, and some luxury goods are never advertised on Youtube at all. But the relationship between per person spending and that person's contribution to ad funding for the sites they visit still roughly holds.
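The crude calculation described here can be written down directly, with invented figures (the 2% ad share is an assumption, not a measured number):

```python
# Crude numerical sketch (all figures invented) of the redistribution
# argument above: if sellers spend a similar share of revenue on ads,
# each person's contribution to ad-funded services scales with their
# total spending, unlike a flat subscription.
AD_SHARE = 0.02  # assumed: ~2% of consumer spending flows out as ad spend

def ad_contribution(annual_spending: float) -> float:
    """Rough ad-funded-services 'payment' embedded in a person's spending."""
    return annual_spending * AD_SHARE

high_earner = ad_contribution(100_000)  # 2000 via ad-loaded prices
low_earner = ad_contribution(10_000)    # 200

# A flat subscription would charge both the same absolute price,
# which is the regressive comparison being made.
```

Whatever the true value of the ad share is, as long as it is roughly uniform across products, contributions scale with spending, which is the whole point of the comparison.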

This is what I think. I'm not an economist though. There are certainly many open questions as to how strong this redistribution effect is and what the effect of ad targeting is. But the claim that there is no such redistribution effect at all seems extremely implausible to me.

"Why would you ever want them gone if they’re so useful and helpful?"

Good point. Netflix replaced cable for a reason. People can stomach ads up to a point but I don't think anyone likes them.

Well said.

Seeing ads isn't nice.

No, I don't want that, for multiple reasons:

- I don't trust ad networks to give me more relevant ads, the data they currently have has not made my advertising experience better, so I don't see why giving them more data is going to fix the problem. I don't see strong evidence that advertisers know how to make useful ads regardless of how much data they have.

- I don't trust ad networks to target responsibly for my benefit. Ad networks are trying to manipulate me into buying products, they are trying to affect how I view the world. That's a hostile relationship, they don't have my best interest in mind, so "more effective" is not necessarily going to translate to my benefit. Ad networks are not trying to make ads more useful to me, they are trying to get me to buy stuff.

- I don't trust ad networks to only use tracking to improve relevance. I take it as a given that their tracking will be used for underhanded price changes, changes to UX to make it harder for me to complete certain actions, deal availability, geolocking, changing results when I comparison shop, and other anti-user practices.

- I want to have control over what data goes into my advertising profiles. Tracking me everywhere forces me to treat my advertising profile like I would treat a cat -- I don't want to do reinforcement training on my ads. With tracking, if I want to be advertised a certain product, I have to reinforce to the network that I care about it. If someone sends me a link, I have to think before clicking on it because I don't know what that will signal to advertisers. This is a really awful way to interact with computers in general, and it discourages people from freely browsing the web.

- Ad tracking creates an additional security risk for my data. I might get advertised an embarrassing product at the wrong time in front of the wrong person, that information might get leaked to other 3rd-parties that are somehow even less scrupulous than advertisers. There are multiple instances of ad networks effectively doxing people, outing their secrets. It's not safe to trust ad networks with that data.

- Even if none of the above was true, I don't take it as a given that even at a purely conceptual level targeted advertising is better than untargeted advertising. I disagree with the philosophical premise behind that kind of marketing, I think that marketing should be user controlled and based on signals that users consciously give about what they want to see. I think in most cases that users should start the search for a new product themselves and decide what they want advertised to them. Even if the advertising industry was ethical (which to be clear, it's not), I still don't want targeted ads.

- And even if I did want targeted ads, heck anyone who is tracking me for advertising purposes without my permission. If your product is so heckin great, then it shouldn't be a problem for you to get me to opt into tracking. The lack of affirmative consent is a problem, regardless of the outcome. You have to get people's permission before you do this stuff -- even if I'm happy with the result, that doesn't excuse you from asking my permission. And no, collecting the data anyway but just showing less relevant ads on the front end doesn't count. I don't want the tracking code on my computer at all unless I've invited it to be there.


Now, completely separately from everything above, I also don't want to see ads at all and I think everyone should block them and burn the entire industry to the ground regardless of the consequences. BUT that is not the primary reason why I'm against fingerprinting and user tracking. Even if I loved ads, I still wouldn't be OK with the kind of tracking that tech companies are doing, and I still wouldn't want them to fingerprint me.

I like to think of it this way: ads do not exist to serve value to me. Ads exist to extract value from me. They are an increasingly obtrusive and insecure means to convince me to spend money. They are, essentially, psychological warfare on my wallet.

And no, I am not being sarcastic, nor do I think I'm being overly hyperbolic in this.

As a solution, I propose a popup on each website that forces the user to accept FLoCs.

I have a suggestion to improve your idea: Allow the user to deny the FLoC if they wish, by redirecting them to a byzantine "Preferences" page with 2 dozen options to opt in or out of. Promise to apply the user's choices within 7 business days.

Bonus points if you make the preferences page a dungeon that users have to navigate with twisted meanings and options that work counter to the user's actual preferences.

Critical hit if the preferences page has toggles that actually don't toggle, and just remain "on". This tracking data sent to a third party is certainly crucial in the development of a website.

The preferences page itself should have a modal popup requiring the user to accept the website's terms prior to using the preferences page.

And after going to the site, and being forced to toggle every opt-out option individually before saving... if you return to the site later none of your opt-outs are shown and you have to redo everything each time.

Well that was your fault. The seventh toggle on the list was clearly labeled, "Use cookies to track my choices for the preceding six preferences and the following eight preferences"

> people assured me that FLoC was an improvement to privacy

It is an improvement to privacy. Cookies uniquely identify me with no other information required. FLoC does not uniquely identify me with no other information required.

The opt-out is similar too: block cookies in the browser or block FLoC in the browser.

You're talking about FLoC as an alternative to third party cookies, but major browsers had already done away with those before the rollout of FLoC. If getting rid of third party cookies was two steps forward, FLoC is one step back. We're technically further ahead than where we started, but that certainly wasn't thanks to FLoC. Without it we'd be even further ahead, so that's what we should be advocating for.

It was created so that well-behaved adtech companies could target based on FLoC alone, without having to resort to fingerprinting. Just as before, less-reputable adtech will continue to fingerprint to try to advertise to people with third party cookies turned off. As Chrome continues to implement fingerprint-resistance technologies, these techniques will become less useful for advertisers who don't want to rely on FLoC alone.

Basically, it’s a way for google to implement fingerprint resistance in chrome and default to blocking third party cookies without killing their own funding source.

> It was created so that well-behaved adtech companies could target based on FLOC alone, without having to resort to fingerprinting.

I think this is a pretty good take. With FLoC there is a plausible story to tell companies that want to target/customize, but only to the degree users tolerate.

Once that's established, it's much easier to go after shutting down businesses using less ethical means.

If you care about privacy, "well-behaved adtech companies" is an oxymoron.

> major browsers had already done away with those before the rollout of FLoC

No, not really - ETP only blocks the most technically literal meaning of "third-party cookie" while still allowing plenty of tracking scripts to work with shared first-party data.

Chrome has well over 50% of the desktop browser market share, which by some measurements makes it the only major browser, and FLoC is definitely a prerequisite to Chrome disabling third-party cookie support.

Despite Google's marketing, FLoC has nothing to do with the removal of cookies.

Cookies were going away regardless, every other browser is doing it, Chrome is not powerful enough to go against the grain on this issue.

Separately from removing cookies (which was always going to eventually happen), Google proposed FLoC because they claimed it would help advertisers accept the change without encouraging them to build another equivalent tracking method using fingerprinting. Unsurprisingly, advertisers immediately took FLoC and used it to build another equivalent tracking method using fingerprinting.

The mistake here is meeting the advertising industry halfway. Just remove cookies. You don't need to propose anything else beyond that.

It has everything to do with the removal of cookies: Google has very clearly been waiting to have a viable alternative before they start blocking them.

It’s been something like 4 years since Safari started blocking cookies. You say Google isn’t powerful enough to resist, but Chrome has >60% market share.

Google can delay, but I do not believe that even with 60% market share they are strong enough to resist permanently.

It is definitely in Google's best interest to act like FLoC is necessary to remove cookies, but I don't take their marketing at face value. They care about being competitive with Apple; they were even forced to pretend to care about advertising IDs after iOS's recent changes.

:shrug: pretty much every other browser has rejected FLoC as well, so I guess we'll find out if Chrome is really able to just go their own direction. But I think this is one of the rare instances where people are overestimating Chrome's power.

I don't believe Chrome's team would be doing any of this at all if they didn't see the writing on the wall about where the industry is going. My take is that they're trying to get in front of an inevitable industry-wide change to mitigate its impact on their core business. It's not out of charity or real concern for user privacy that they're proposing any of these compromises; Google would be perfectly happy to stay in a world with 3rd-party cookies if they thought they could get away with it.

A lot of their recent proposals start to make sense when viewed through that lens. See their effort to propose a standard where 3rd-party sites can be treated like 1st-party. See also their increased efforts on moving away from URLs for domain scoping. See also Manifest V3. Google is scared about this. They are scared of the situation getting out of their control.

And even if Chrome is powerful enough to resist removing 3rd-party cookies forever, I'd almost prefer they do that. It'll make it easier to get people to switch off Chrome when it is objectively less private than every single other browser in meaningful, easily demonstrable ways. And we do need to figure out a way to break up Chrome's stranglehold on the web anyway, so every reason helps. With the addition of FLoC, Chrome will already be less private than other browsers, since FLoC is a strict privacy downside over just removing cookies. So it's good for that loss of privacy to be even more visible, and to remove Google's ability to hide behind a confusing narrative about how their fingerprinting vectors are actually good.

Maybe this will make people switch off of Chrome, but I doubt it. In just the last 3 years Chrome has gained >6.5% market share.

So what, those third parties will then request that first-parties run a cookie proxy to generate and relay your unique ID, in order to get better payout rates?

I would guess it won't be long until the guides/kits on how to proxy your requests roll out. With some incentive that makes it financially dumb not to do it.

> However, it is likely that the same FLoC IDs or range of IDs will be associated with someone.

This seems all quite speculative. In the first paragraph they describe FLoC IDs as changing constantly - why would they assume new IDs are not being generated, and groups are not constantly being mixed?

Further down it quotes ...

> “If your behavior doesn’t change, the algorithm will keep assigning you in that same cohort, so some users will have a persistent FLoC ID associated with them — or could."

When combined with other information that is already being used (such as canvas fingerprinting and other techniques), this looks like it can help narrow it down even further.

> “We can use that as another signal to create a stable identifier for them.”

Arguably for advertisers it’s a benefit - if your behavior changes to that extent, likely you’d be better targeted by different ads.

Right, and FLoC was made for advertisers. They're the customers here.

It’s extremely speculative. While I’m allergic to ad tech, I’m even more allergic to poverty, so I have occasion to work with/around that side of the industry.

The best way to explain it is that a lot of companies have been making significantly more money than their technology is worth. A number of initiatives have attacked the data-in side of the equation so the underlying tech is showing how questionable it really is.

This type of research should be filed under “could be big” but at this point it’s closer to public relations for ad tech firms than “the sky is falling, become Amish.”

We need to collectively switch to Firefox and Brave.

Correction: they never injected ads, but they do remove ads and replace them with "attention tokens", which is an opt-out system for publishers. See also https://tech.hindustantimes.com/tech/news/this-popular-brows.... In short, please look into the questionable aspects of Brave.

Brave is perhaps the most ethically challenged browser out there. Hopefully they have stopped doing this, but they were injecting their own ads instead of what the publisher put on their website.

Is it Brave that uses BAT? You're being downvoted but weren't they collecting BAT for sites whose owners had never signed up, and presumably keeping it if the owner didn't claim it?

Right, that's what I was thinking of. They created an opt-out system for publishers.

The reader can guess how Brave expects to make money with a free browser that is handing out BAT :-)

This makes sense to me. If you make the system opt in, publishers aren't going to give a fuck because like 0.01% of their users browse on Brave. And that means users aren't protected because sites that were already okay selling their users out are still going to do it. So you have to make it opt out so that users are protected by default. Idk just seems like the incentives are nicely aligned here: the user-centric option also happens to be the one that benefits the browser vendor. Isn't that what we want?

I think some have issue with them collecting monies on behalf of an organisation who has made no indication they are interested in participating. If my site gets 1000 BAT in payments which sit there because I decided I don't want to take part, what happens if I say to Brave I never intend to take part? Do they give those BAT back? Site visitors may have had the impression they were paying for my content while Brave is pocketing it.

Personally I reserve judgement, but I see why some look at it this way.

I can see that. Perhaps there's a social fix somewhere around messaging or something. Ultimately it's not Brave that's doing this to site owners, it's the users who install Brave for Brave's feature set. I think the elephant here is that it's not really up to site owners to say whether users can use ad blockers, or which browser they should use. If I install an ad blocker as a user, that's my choice. So installing Brave, essentially an ad-blocker-as-a-browser, doesn't seem like the thing that site owners should be able to opt out of at all. Allowing publishers to opt out feels like a tactical compromise.

Of course sites can block requests based on Brave's user-agent string if their business depends deeply on ad revenue and they consider users with ad blockers to be abusing their service. That's their prerogative if this really irks them and it's worth losing the users. On the flip side if this becomes popular enough then site owners see real money on the table and they'll opt in to picking it up. That seems like an easy fix for them. If I was a site contemplating either blocking content to users with ad blockers or allowing cooperative users to opt into a more private client-side ad experience which still gives me the opportunity to collect revenue for their traffic, I'm pretty sure I'd choose the cooperative approach.

They never injected ads. If you enable Brave Rewards, you do get notification/pop-over ads and are rewarded a small amount of BAT, which can get redistributed to content creators or sites. If you don't, all this is disabled.

Soon publishers will be allowed to run Brave ads on their website (and earn money), but users will be able to disable these ads anytime. So Brave is building a web experience where users can choose to enable ads or not (and if they do they will earn crypto).

I switched to FF years ago. I used to root my Android phones but I'm getting too old (read: jaded) to care now. Firefox means that with uBlock I _still_ never see ads on my phone. I pay for apps I want to use, and I don't use apps which don't have an alternative to ad-supported, with one exception being Twitter and "Promoted" tweets, which I can spot and skip in a blink.

If you want to have fun with Twitter ads, block the advertiser whenever you see them. After two weeks or so you start to get really.. different stuff.

I've been using twitter since 2009 and do this religiously, I have literally thousands of corporate accounts blocked. My ads at this point are incredibly niche, bordering on surreal. Sometimes I will retweet them just because they're so absurd.

Any highlights?

Can you elaborate? I'm not sure if this is something I would prefer to current ads, or something I would want to avoid.

Back when I used to use Twitter I found it funny. I used to get cranks and hustlers, with the occasional “Why is this random personal tweet promoted?” experience mixed in.

Once you block out the "big advertisers", you get advertisements from smaller companies or even individuals.

That actually sounds like something is working within AdTech! How can that possibly be?

A good lie is based in truth!


(Not on Twitter, can't run the experiment myself.)

When you say "block", do you mean Twitter block the account which is the Advertiser?

Block the account whose tweet is being promoted, and/or the account promoting it (sometimes there will be tiny text saying "promoted by"). You'll never see tweets from that brand again, whether promoted or unpromoted.

After a few weeks of this you'll get some pretty strange tweets from really obscure brands. You will however still regularly see promoted tweets from financial services companies, because there are apparently an infinite number of them.

Yes. Once Twitter's algorithm figures out that you won't consume a particular ad, it finds others. The more you block, the smaller the pool of advertisers gets.

Nice, I have a new Twitter game to play, thanks.

For Android I can also recommend Bromite. It's Chromium-based with a built-in ad blocker and focus on anti-fingerprinting.


Thanks for the suggestion.

I currently see no reason to switch from FF which I use everywhere and benefit from Sync. Hopefully someone else will find it of interest.

Anything like this for iOS that is Chromium-based?

Same, but I use twitter mobile in FF and ad block. I'd prefer a subscription option to ads and an app that's all up in my shit.

I have a few apps that I just use the mobile web version. The bonus is that there is no app collecting gps and lists of installed apps and such, though, in theory, the new permissions help with that.

I finally made the total switch to FF last year and it's fine; the stuff got imported from Chrome correctly. Containers are a good bonus. I keep a copy of Chrome at hand but I used it maybe once a month or two to test some random website, that's all. There is no single reason to remain on Chrome for me, I just regret it took me so long. No crashes, no noticeable performance differences, no quirks. I'm very grateful Firefox exists.

Edge is a good alternative when you _need_ a chrome browser.

To Firefox only. Switching to Brave does not improve the Chromium engine hegemony.

Why does it matter if the underlying engine is shared amongst multiple browsers?

Because as it is now Chrome (read: Google) can dictate how the web works, and they do. If you use a Chromium-based browser everything just works, and happy people don't often complain or see the problems. So using something Chromium-based makes you part of the problem. Apple isn't much better though, so there aren't many options left...

What prevents forks of chromium from sharing amongst each other? I don't think google can gatekeep to the extent you seem to be implying. I'm pretty sure Brave and Opera and MS Edge can share changes they've made amongst each other without going through the original repository first.

Nothing stops them sharing, but they aren't big enough to change the web standards (and in some cases they aren't interested in doing so).

Think about it like in this oversimplified/stupid example: a "fix" or change is implemented in Chromium. In the long run (intended or otherwise) it turns out it moves pixels slightly differently than the way Firefox does. If almost all your visitors use Chrome, you have to design your site to be perfect in Chrome, and you might not bother doing so in Firefox. Now you have sites that look as intended in Chrome but maybe not as they should in Firefox. This makes Firefox users use Chrome more.

Now Brave et al. are part of the problem, helping drive the only real competition out of the market (and killing their only way out should they some day need to change engine).

With extremely complex code this is very hard not to be a part of, and a company like Brave is way too small to fork Chromium for long, if at all.

You underestimate how much effort and money it takes to maintain a fork of a browser. Chromium is a project as big as the Linux kernel (if not larger), but it's primarily developed by Google employees, so they have huge influence on the direction of the project.

Because it creates a web that is not meant to work following a standard but to work on only one engine.

What this means is that this engine becomes the de facto standard of the web and this standard is controlled by the main contributor of the engine.

Every browser is now constrained by Google's own decision about what should the web be. Sure, they could technically disagree by forking WebKit/Blink, but since websites are made to work with Blink, a disagreement means being incompatible with such websites.

Arguably the engine SHOULD be the standard. The W3C and WHATWG have never been able to reliably document, much less enforce, standards. Add in ECMAScript variances and all the other newfangled web APIs and it's a losing proposition. The standards bodies never could keep up with the pace of innovation. Might as well let the code BE the standard.

That's already the way it works in the real world... the standards are irrelevant and ignored, only caniuse and browserslist actually matter. Like it or not, Blink is the new IE6, and its marketshare is only increasing.

Ideally it would be something not controlled by Google but by an independent third party (hand Blink over to Mozilla, deprecate Gecko?), but good luck with that.

Maybe this system wouldn't be as ideologically pure as building compatible renderers to a set standard, but it would result in far better developer and end-user experiences as the web quickly standardizes to a single renderer. The world simply does not need 10 different ways to display HTML with 90% compatibility.

Yes, but it's giving too much power to one actor only. What if Google decides to stop supporting an architecture, or decides that it's ok to have 8GB of RAM as a requirement? What if they choose to implement a hardware-accelerated feature that works only with NVIDIA LATEST-WHATEVER-AI-VR-HYPE? What if they want to implement "crypto payment" as a standard and, oh, that's GDollars?

Of course those are just random made-up ideas, but the point I want to make is that it's giving only one actor the power to define the future of our one and only international knowledge network.

Yeah, that's why ideally the engine itself would be open-source (which it is, though largely controlled by Google). I wish it were further controlled by a third party, kinda like ICANN or Mozilla, but that also subjects it to political capture.

The thing is, the existence of Gecko never actually meaningfully challenged corporate oligarchies. Mozilla's mission was noble but they were never particularly effective at it... web standards went from IE6 being the de facto standard to the Wild West for a while to Webkit dominance to a Blink/Webkit duopoly. There was never a period where we actually saw a standards-based web ecosystem. It was always renderer-based. In that sense, I'd argue the Gecko contributors (and Mozilla as a whole) would have more influence over the web ecosystem if they abandoned Gecko and focused on the Chromium/Blink project instead, especially if they had override/veto power over questionable commits from any one corporation. As it is, Gecko/Firefox is less than 5% of the web. You can't influence, much less set, any real standards when you're just a rounding error.

Like it or not, Chromium IS the standard. Only when Mozilla realizes that will they actually have a chance to succeed at their mission, instead of being the beloved but always-losing underdog...

Google already does this. Ask OpenPOWER or *BSD users about the fact their patches must live outside the Chromium tree and be merged in manually, for example. Mozilla has been much more friendly to niche systems as long as they don't impact higher-priority tiers; OS/2 survived in tree for literal years because it was self-contained and non-obtrusive.

Because if Google decide to make a user hostile change to the engine (Chromium), because everyone consolidated on one engine, now there will be no alternative. The only recourse would be to maintain patchsets or forks, which could be increasingly infeasible from an architectural standpoint, especially from smaller authors.

One way or another, browsers are heading towards engine homogeny (or hegemony), but Firefox and Safari are at least slowing this process down to some extent.

Chromium/Blink is an open source project (forked from Webkit, forked from KHTML).

Chromium/Blink is pretty much only maintained by Google. Google decides what goes in and what doesn't. It's a pretense of open source, except when it serves them.

That's often how open source works. It may not be what you want, but it's not just a pretense.

I think it can be. AOSP feels a lot like a pretense.

Actually there is now a ton of contribution from Microsoft as well.

How does switching to FF improve "Chromium engine hegemony?" FF is not Chromium based.

Seems they meant "improve" as in lessen the hegemony.

That's the point. Using a Chromium based browser just further cements Google's grasp on the web.

I'm biased, but a lot of users should also be using products that respect user privacy. We're building a dev-focused ad network called EthicalAds that does this: https://www.ethicalads.io/ -- but there's lots of other privacy-focused products out there we should be building and supporting.

I've been on Firefox for a few years now and it irritates me how super slow it is. Version 89 seems to be a bit better though.

I know it might be the extensions (I have 20+) but I seriously don't care. Chrome manages to be fast with the same set of extensions.

I don't like Google but Firefox's slowness is a real strain on my productivity and brain well-being. Hope they improve even more soon.

I am on an iMac Pro btw. Stuff like this should not ever happen on a workstation.

Not sure if this would help in your case but try to enable `gfx.webrender.all` or `layers.acceleration.force-enabled` in `about:config` if it's not already enabled.

Only the second one was disabled, enabled it. Thanks for the tip!

In that case it won't make a difference I think, `gfx.webrender.all` should override `layers.acceleration.force-enabled`.

Strange. I find Chrome to be annoying and slow.

What is it that is slow, are you on OS X?

Yes. Often times pages load very slowly, we're talking 5-8 seconds. I am on a gigabit connection (yes, that doesn't guarantee that my ISP has fast access to that particular page, I know) and Chrome is always loading those pages at least twice as fast.

Can't describe it perfectly. The UI is responsive but page loading is just severely slowed down -- not always but often. I have a bunch of privacy extensions but again, they don't seem to make Chrome sweat.

> Often times pages load very slowly, we're talking 5-8 seconds [...] I have a bunch of privacy extensions

If you are using uBlock Origin, you may want to see if un-checking "Uncloak canonical names" option in the "Settings" pane in the dashboard makes a difference.[1]

There have been reports of slow page load with some network configurations, and this has been linked to DNS lookup in uBO.[2]

Chromium-based browsers do not support CNAME-uncloaking, and so this would explain why the issue is not present in Google Chrome.

* * *

[1] https://github.com/gorhill/uBlock/wiki/Dashboard:-Settings#u...

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1694404
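To illustrate the mechanism under discussion, here is a rough TypeScript sketch of what CNAME uncloaking does (this is not uBO's actual code; the function and variable names are made up, and `resolve` is a pluggable stand-in for Firefox's `browser.dns.resolve()`, the API Chromium-based browsers lack):

```typescript
// Before letting a third-party request through, resolve the hostname's
// canonical name and re-check the blocklist against it.
type Resolver = (hostname: string) => Promise<{ canonicalName?: string }>;

async function shouldBlock(
  hostname: string,
  resolve: Resolver,
  blocklist: Set<string>
): Promise<boolean> {
  if (blocklist.has(hostname)) return true;
  // This extra DNS lookup is the step that can slow page loads when the
  // network path to the resolver is slow (e.g. behind a proxy).
  const { canonicalName } = await resolve(hostname);
  // A first-party subdomain CNAMEd to a known tracker is uncloaked here.
  return canonicalName !== undefined && blocklist.has(canonicalName);
}
```

In other words, a tracker hiding behind `metrics.firstparty.example` still gets matched against the blocklist under its real name.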

Super interesting, thank you! I immediately disabled it and will monitor if that improves my page loading speeds.

EDIT: pay no attention to the text below, I have misread the linked documentation. uBO isn't using external proxy for any network requests.

Not sure how much -- or at all -- you're involved with uBO. Your name does ring a bell though so I'd like to remark to you that making the users' browser use proxy is a step too far. It shouldn't automatically be enabled.

A privacy extension should do everything it could locally and stop there. If I one day figure it's not enough then I'll set a privacy VPN (or use an existing one).

I don't want that decision made for me on my own machine without my consent. :(

And apologies if my comment is misguided -- I only skimmed the linked page and I might have misunderstood.

> making the users' browser use proxy is a step too far

uBO does not do this, and nowhere is there any suggestion that uBO does this.

Users configure their own network settings, and it was found that in some cases when the browser is configured to go through a proxy (through either OS or browser settings), uBO's CNAME-uncloaking feature, which requires a call to the `dns.resolve()` API[1], would cause undue delay to page load. The root cause is outside uBO and outside the browser, it lies in the proxy.

* * *

[1] https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...

Then I have misread. My apologies. Editing the comment above so as not to mislead readers.

How did you end up at the conclusion that it is Firefox that is the problem and not one of the add-ons? Or maybe you didn't? Some add-ons can do less in Chromium based browsers than in Firefox. Are you sure this isn't what you are seeing instead? Like slower DNS because of add-on settings or bigger block lists?

FYI I'm asking, not doubting or blaming or whatever.

I have no clue if that's the case hence I am not blaming only the browser or only the extensions. It's most likely a combination of both.

But the fact remains that I installed 100% the same set of extensions on Chrome and it loads pages at least 2X faster.

I might be a programmer, I might care about putting a rod in Google's giant personal-info-gathering machine, and all that good stuff that makes us feel we're making a difference in the world -- but when 1/3 of all my pages load more slowly in Firefox, I can tolerate this only for so long.

So I don't really know which factor is the real page load speed detractor. I just wish the Firefox team fixes it.

Well, that's part of the issue. On macOS, all browsers are just skins over Safari. Google obviously has more resources to make this work better but I agree, overall I always found Chrome better than Firefox on macOS.

You can't really compare that to a native linux/windows experience.

You’re thinking of iOS. On macOS third party browsers can and do implement their own browser engines. Firefox is not running on WebKit.

On macOS all browsers are distinct engines, just like on Windows and Linux (yes, and on Open/Free BSD, Haiku, etc. I see you). It's on iOS that all browsers must be skins over Safari.

That certainly doesn't seem quite right, have you got any sources for this?

Wait what?! I thought this only applied to iOS / iPadOS.

You're telling me Firefox under macOS is using Safari's engine? If so, wow. Extremely disappointing.

No, the parent is incorrect. This is only true on iOS/iPadOS.

Been on FF for a few years now. FF89 is really nice.

Is Safari in the running?

Safari team is seriously understaffed, because Apple want people to use the App Store instead.

The EFF launched its site Am I FLoCed? [1] on April 9 for Chrome browser users to check whether they've been recruited/co-opted into Google's FLoC testing programme, started in March.

Not sure how it works, but the EFF doesn't seem totally confident that it can detect those affected: "This page will try to detect whether you've been made a guinea pig in Google's ad-tech experiment."

[1] https://amifloced.org/

FLoC has a well defined API like (all) standards: https://wicg.github.io/floc/

There's no reason why the website would not be reliable at detecting whether FLoC is enabled on your browser.
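For reference, a minimal TypeScript sketch of such a detector, following the shape of the WICG draft (`document.interestCohort()` returning a promise of `{ id, version }`). The document object is a parameter so the logic can run outside a browser; in a real page you would call `detectFloc(document)`, and the cohort values below are invented:

```typescript
// Feature-detect the FLoC API and read the cohort id if available.
type CohortDoc = {
  interestCohort?: () => Promise<{ id: string; version: string }>;
};

async function detectFloc(doc: CohortDoc): Promise<string | null> {
  if (typeof doc.interestCohort !== "function") {
    return null; // API not present (non-Chrome browser, or trial not enabled)
  }
  try {
    const cohort = await doc.interestCohort();
    return cohort.id; // the cohort number shared with thousands of other users
  } catch {
    // The call rejects when FLoC is blocked, e.g. by a
    // "Permissions-Policy: interest-cohort=()" response header.
    return null;
  }
}
```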

You can also use https://floc.glitch.me/ which was linked in one of the blog posts from Google.

In the sense that you can call "side channel for ill-specified data Google wants to leak" an "API".

> Nonetheless, the ad industry — which co-opted foundational internet technologies like the cookie and the IP address into means of identifying people online

How can you have an Internet without using IP addresses? Do you just use Onion routing all the time?

VPNs (aka proxies) can also help. I think most ISPs (at least where I live) also use dynamic IPs, which reduces the utility of IP addresses for tracking somewhat.

I think cookie + browser fingerprinting is a much better way to track people in this situation, because it removes the uncertainty associated with dynamic IPs and multiple users behind a NAT.

In Sweden every single ISP I've used in the last 10+ years has had CG-NAT. I absolutely hate it because it stops me from self-hosting, but on the upside, if an app/website only uses my IP address to track my location then I am apparently in one of the largest cities in Sweden, which is also 2+ hours away from me. However, most apps don't only use IP addresses, so it really doesn't balance out.

For those of you who don't live in small countries: being 2 hours away in Sweden means you have to pass several other independent cities on your way there.

My home has a nominally dynamic IP (from Comcast), but I've only ever seen it change when the modem gets power cycled.

Yeah I think different ISPs have different policies when it comes to that, I think Orange in France would cycle the address every couple of days regardless of power cycling, but here in Portugal it seems that my Vodafone public IP hasn't changed in months.

This is quite typical. When I was younger and needed my IP changed, I'd simply power cycle my router.

Newer ISPs use CGNAT so you'll be sharing your public IP with a few neighbours (7+you in the case of my ISP).

My last ISP used to cycle IP addresses at 1am every night. The modem would disconnect and reconnect every device on the network.

You can't. That's why IP addresses are a "foundational internet technology".

That's the question I ask every time I see the phrase "we receive your IP address". Of course you do, that's how the internet works.

IP addresses identify computers.

They only identify people when joined with other information.

Isn’t using Onion routing pretty much what Apple is proposing?

There are big differences.

Private relay is secure as long as Apple and the third party do not cooperate, but end to end flow correlation is much easier because streams are not isolated.

Onion routing is much more sophisticated than private relay: even end to end correlation is more difficult because of how virtual circuits are made.

I'm hoping for a sort of community revolt where most popular pieces of software that serve http, and most popular websites send back the "Permissions-Policy: interest-cohort=()" header as a default setting. I know Github did this, hoping others will follow.

Does that really do anything? The entire Permissions-Policy header confuses me. Isn't that telling the browser to lock the FLoC API for that domain? So resources loaded from github.com won't be able to call document.interestCohort(). But GitHub doesn't serve ads, so why would their scripts be using the function? And what's the point of declaring that scripts from your domain are not allowed to use the FLoC API (or geolocation, which is the only other policy I'm aware of) versus just not putting the code in your scripts in the first place?

The EFF summarizes it this way:

"If you are a website owner, your site will automatically be included in FLoC calculations if it accesses the FLoC API or if Chrome detects that it serves ads."

Personally, I don't trust Google that much. Chrome knows which websites I've been to, so it could easily (accidentally, or on purpose) just include any site. Google also has a history of starting conservatively, then rolling out stuff a little at a time. "Boiling Frogs".

Rolling out the header everywhere seems like a good way to keep Google honest about it. Chrome can obviously still do whatever it wants, but it would be harder to explain for them if they shared info on an explicitly opted-out site visit.

It's also just a sort of ceremonial way of expressing dissent with the idea in general. In a way that people could collect statistics on and track.

> Does that really do anything?

It does very little; effort is better spent getting people off FLoCed browsers like Chrome.

More info: https://seirdy.one/2021/04/16/permissions-policy-floc-misinf...

This is what's really bad about FLoC; it's so hard to fight back on behalf of oblivious Chrome users who didn't opt out. For the uninformed, there's no winning move.

That post is helpful, but adding a global header across a site is typically very little effort. And there's nothing in there that says it's harmful.

And I'm unconvinced on this part:

"If your website does not include JS that calls document.interestCohort(), it will not leverage Google’s FLoC. Explicitly opting out will not change this."

I try to know everything running on my site. But especially with things like a deep npm dependency chain, I know not everyone knows everything that's running on their site. Or maybe Chrome will interpret an image that happens to be an IAB size as an ad. I recall a certain storage-related company recently running Google Analytics on an admin page, something the tech team didn't intend to happen. But shit happens.

I think it's worth putting up, both for whatever limited help it provides, as well as a visible vote against FloC.

If your stack is so deep that you're worried about serving malware to users for them to execute, you might want to put a warning on your site so users understand the risk before executing scripts.

The fact that the above sentence sounds unrealistic nowadays is extremely depressing. If malware distribution through web browsers wasn't already the norm, it'd look like common sense.

Aside from the stack, I also mentioned a marketing department doing something the tech department wasn't aware of. It happens.

If the tech department needs to protect users from the marketing department and can't approve or advise changes the marketing department does, then this is a good short-term solution. Fixing the organizational issues would be a good long-term solution.

I do admit that "a visible vote against FLoC" is a good reason to put this header; I've updated the article.

Diff: https://git.sr.ht/~seirdy/seirdy.one/commit/155d4f7b915f9f04...

I don't think these votes will sway Google but I do think they'll spread awareness. I still think that a better use of our time is getting users off Chrome.

Could a browser extension do this for every single request?

It could now, but Manifest V3 kills blocking/intercepting WebRequest, so not for much longer. Maybe Chrome will add some FLoC APIs to the extension API?

No-one could have predicted this, he said with his eyes rolling back in his sockets.

Just as a reminder, Google is also planning to drastically improve privacy through a sandbox in Chrome: https://www.chromium.org/Home/chromium-privacy/privacy-sandb...

FLoC (and FLEDGE, and PARAKEET, and a million other bird proposals) is being used as a way to mitigate some of the loss that publishers and advertisers will see when those privacy measures are put into place.

> FLoC (and FLEDGE, and PARAKEET, and a million other bird proposals) is being used as a way to mitigate some of the loss that publishers and advertisers will see when those privacy measures are put into place.

I think this illustrates that the whole bird-brained idea is to placate the advertisers so they won't run to congress, while continuing to allow Google to fingerprint people with its own, better, data, thus increasing its advertising advantage.

That's one way to look at it, and certainly a view that many share -- but isn't that more likely to attract regulatory scrutiny instead of deflect it?

I sit in on many of these W3C meetings (I'm not from Google) and the discussion is always "given that we want to achieve X definition of privacy (where X varies depending on the proposal), how can we mitigate some of the fallout that will happen". It's never "how can we defeat these privacy measures that are going to be put into place so we can keep the status quo".

You can argue that advertising is a net negative for the web or that it's evil or whatever you want, but if you frame it as "how can we take steps to make things more private for end users without completely destroying the way businesses on the web make money" then I think the current path is a reasonable one.

You cannot separate the morality of an action from its usefulness to business; that's putting the cart before the horse.

Modifying your last sentence illustrates this nicely:

You can argue that slavery is a net negative for the world or that it's evil or whatever you want, but if you frame it as "how can we take steps to make things better for slaves without completely destroying the way businesses in the world make money" then I think the current path is a reasonable one.

Is there a version of Godwin's Law but for slavery comparisons?

This is a very loose definition of "privacy." It is more accurate to say that they plan to create a sandbox for privacy, to limit the amount of privacy you are allowed to have. As the article describes, all of these techniques are additional signals for tracking purposes.

I don't think that's an accurate representation at all. Look through the proposals and see, for instance, that it goes beyond anything that Firefox is planning to do.

Dropping third party cookies is just one small piece of it.

Google already knew this, of course. There have been several papers over the years on uniquely identifying people using disparate sources. FLoC would be part of that.

For all the remarks here on how FLoC is bad for privacy, I'm surprised that StackOverflow is missing the opportunity to discuss the larger issue of monetization methods in tech.

Ads by themselves aren't bad and neither is personalization. These methods of monetizing content have led to the huge surge of interest, talent and capital into Software Engineering companies and has arguably funded some of the most remarkable technological advancements of the past two decades or longer if you consider all the things that have come out of research projects in companies with primarily an advertising business model.

There are certainly valid criticisms of personalized advertising, and personally, I believe there is too much concentration of power in the market.

However, I'd love to hear intelligent people's perspectives here on how we preserve this wildly successful business model while tackling the abusive parts. If tomorrow the consequences of these restrictions by platform owners such as Apple or Google end up concentrating more of the power within the walls of the largest players, we're simply trading one set of problems for a much larger set.

Guess that guy who works on ads at Google will be along shortly to assure us that everything is fine again.

Am I missing something? Wasn't this the pitch for floc from day one?

> presented by Liveramp

Sorry, what is this referring to?

All Ad tech, spyware, and data collection firms/startups (and even startups collecting any PII data) with little care for the GDPR or the CCPA are extremely evil.

No regards for privacy at all, this rampant data harvesting and spying must stop now.

Why is it that all anti-Google posts reach the front page on HN in such a short amount of time?

It's not anti-google, it's against a thing Google happens to be doing. Google holds an incredibly powerful position on the internet, so when they do something that could be bad it affects a lot of people. In this case, you could argue it affects pretty much everyone who uses the internet.

You don't get to be the biggest entity on the internet and keep the cutesy hacker facade, they've got an enormous responsibility to the community now.

Also, sensationalist posts get more attention than nuanced, unbiased posts.

Advertising and privacy issues shoot to the top; whether Google, Apple, Facebook, what have you.

Why do anti-Google articles always have someone claiming unfairness or singling-out as one of the first comments?

Because HN is full of people who care about the Internet and Google has been working to destroy the Internet for the past 15 years?

Same can be said for all manner of Google related posts. Not all of them are necessarily negative.

Many people don’t like being spied on.

36 points in 15 minutes is probably why...

I have often noticed that the mods alter the ranking of posts, but I don't think that they did in this case (they admitted doing it in some cases). The content on HN is very much curated/controlled.

The article says they "could" do it, not that they are.

"New thing not perfect!"

No, the article says they are doing it.

> As privacy and data ethics advocates warned, companies are starting to combine FLoC IDs with existing identifiable profile information, linking unique insights about people’s digital travels to what they already know about them, even before third-party cookie tracking could have revealed it.


> Advertising companies are already strategically gathering FLoC IDs and linking them to identifiable data or analyzing them in an attempt to uncover information about people that may not have been known before, mimicking how they have parsed what third-party cookies told them about people’s behaviors.

"starting to" "in an attempt to"

I see the submitted title has been altered anyway.

More like "New thing works exactly as .... advertised"

There is a fundamental tension between me wanting to walk around the world as a free human individual and a large group of people who for some reason or another want to know exactly what kind of fart I prefer, so that they may match me with the correct kind of fart cushion so that I might buy it.

The idea that I might not want a fart cushion in the first place, and that if I fancied the idea of getting one, I might go to the fart cushion store to find the perfect fart cushion, does not seem to occur to these people. I am sure technically they have thought of the possibility, but they do not respect me or my boundaries.

What this "new thing" does is manage my suspected fart-cushion preferences inside my browser, instead of some "cloud", to then tell fart-cushion-selling enthusiasts that indeed, I might be one of these people with an interest in fart cushions.

This "new thing" doesn't change anything about the fundamental issue that my thoughts and aspirations as a fart enthusiast are not the business of any moron who wants to market their newest fart-cushion invention to me.

I just had to give you a +1 for that.

Don't forget that there are people who want to find out if you are a fart enthusiast, so they can then use that to coerce you into "playing ball" with them.

Sort of the digital equivalent of the "$5 wrench[0]." Social media and adtech have been a freaking goldmine for spies.

[0] http://imgs.xkcd.com/comics/security.png

Honestly, who asked for FLoC and the MS alternative (edit: it's PARAKEET)? Why can't ad servers just follow my Do Not Track preferences? Serve me a random ad, or a context based ad if I have Do Not Track set... seems a lot simpler no?

Those using ad-blockers are going to use them anyway, but some assurance that I won't be tracked would go a long way towards me turning off ad-blockers on the sites I don't frequent, and even those that I do if they don't have an ad-free subscription. Those wanting personalized ads can set the Do Not Track preferences accordingly, possibly by site.

Do Not Track failed because a) the worst sites would just ignore it and b) Microsoft tried to use it as a weapon against Google by setting it as default

Do Not Track failed because nothing actually enforced companies to respect it. It has to be law for any effect.

Obligatory "Microsoft Did Nothing Wrong" post:

Too busy to do a long post in detail, but short version is that advertiser's acceptance of DNT was entirely dependent on people not using it. If Microsoft had left it as opt-in but the majority of consumers had turned it on, we would have seen the same result.

You can see the same principle to explain the response to iOS's privacy changes, which are not opt-in or opt-out; they force the user to make a choice. The ad industry is still furious about this.

DNT could have only worked if no one used it, and that is not a privacy outcome that is worth pursuing or advocating for. It's not Microsoft's fault that DNT went away, Microsoft was just the excuse the advertising industry needed to avoid it. I don't think that DNT was ever anything other than an excuse the industry could use to keep tracking people and avoid legislation without changing anything. Microsoft didn't take it away from you, they just pulled back the curtain and showed you the truth about it.

A privacy standard that fails as soon as it's turned on by default is almost worthless and it really shouldn't be advocated for in the first place.

- https://news.ycombinator.com/item?id=26821972

- https://news.ycombinator.com/item?id=24294280

- https://news.ycombinator.com/item?id=24289186

- https://news.ycombinator.com/item?id=19483149

Microsoft has 57 screens about turning tracking on/off for Microsoft things when you install Windows, they're not against asking the user, except in that one case...

Doesn't matter, changes nothing.

It is a waste of time and energy to advocate for a privacy standard that can only ever be opt-in. Doesn't matter what Microsoft's motivations are, doesn't matter what they're doing elsewhere. A privacy standard that gets abandoned if it's ever defaulted to "on" for new users is worthless.

It's not Microsoft's fault that DNT went away, DNT was always going to go away the moment a sizable portion of the population started using it.

If Microsoft had asked users when starting up their computers whether they wanted to turn on DNT, and if a majority of users had turned it on, the advertising industry would still have had a meltdown and rejected the standard. We know this because that's exactly their reaction to iOS's changes, which are not turned on by default but force the user to make a deliberate choice when opening the app.

Changes a few things, you don't "know" the result of something that never happened, you suspect.

Also changes the perception that Microsoft were doing it to attack Google.

Once it's a standard you'd get it enacted in law that companies have to respect the setting.

> you don't "know" the result of something that never happened, you suspect.

I don't know that an asteroid isn't going to kill me in the next 3 minutes, but I can make some educated guesses about the world. My read of the situation is consistent with the way that the advertising industry has reacted in every single similar situation. It's consistent with the way that they're acting right now with tracking restrictions that require affirmative consent. There's a pattern here.

> Also changes the perception that Microsoft were doing it to attack Google.

Who cares?

> Once it's a standard you'd get it enacted in law that companies have to respect the setting.

If we're going to talk about suspicions, I have seen zero evidence that a DNT standard would have actually helped get legislation passed. Generally speaking, that theory does not line up with my experience of how lawmaking works.

Nor does getting it passed into law mean that advertisers wouldn't immediately lobby for it to be revoked as soon as any company turned it on by default. Heck, they'd try to force it to be opt-in instead of opt-out in the law itself if given half a chance. They would argue that Microsoft was being anti-competitive (exactly as you're arguing right now). Facebook is currently arguing that Apple's privacy controls are anti-competitive, there's no reason to believe they wouldn't make the same argument about Microsoft if DNT was turned on by default, regardless of what the law said.

A privacy standard that can not be turned on by default is not worth fighting for.

It's not a matter of who asked for it. It's a matter of sustaining income for many, many ad tech companies. They need to be making numbers that confirm how precise their ads hit the conversion sweet spot for specific audiences.

If they are not able to make those numbers, they fear becoming irrelevant.

These ad-tech companies are simply stuck and unable to innovate due to their current clients and promises.

Oh, no! This means I might still get relevant ads that relate to my unique interests as an individual instead of some least common denominator cookie cutter garbage ad.

Personally, I prefer my ads to be as unrelated as possible to my unique interests. That way I can ignore the unwanted distraction more easily, or even be amused by it.

Every time I see an ad that relates to my current unique interests I feel spied upon and concerned that my private life is leaking without my knowledge or control.

I personally love having my personal information taken without my permission, by code running on my device and wasting my battery, so that some company can show me ads I’m never going to click on.

This post is staggeringly under-informed. Just wait until you google for some ailment and your life insurance premium magically goes up.
