The official description [https://github.com/WICG/floc] does a better job of explaining the point.
They try to cluster users' interests into "cohorts" and share those with the ad service.
This could help increase transparency and give you more control over your data, since it's stored locally.
But I don't see a way to limit access to a user's cohorts (they even say so themselves; see the link above). Anyone could access my interests - not just Google and other ad services.
And of course, if you have 1000 categories and some meta information (region based on IP address etc.), you will be able to track down individual users with pretty good accuracy.
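As a back-of-the-envelope sketch (attribute sizes made up for illustration; these are not FLoC's actual parameters), combining a few coarse signals multiplies the number of distinguishable buckets very quickly:

```python
import math

def bits(n_groups: int) -> float:
    # Identifying bits carried by an attribute with n equally likely values.
    return math.log2(n_groups)

# Hypothetical attribute sizes, purely for illustration:
cohort_bits = bits(1000)  # ~10.0 bits from 1000 interest cohorts
region_bits = bits(500)   # ~9.0 bits from a coarse IP-derived region
ua_bits     = bits(100)   # ~6.6 bits from user-agent details

total = cohort_bits + region_bits + ua_bits
print(f"{total:.1f} bits -> ~{1000 * 500 * 100:,} distinguishable buckets")
```

Roughly 26 bits already distinguishes tens of millions of buckets, which is the point: a cohort plus a little metadata narrows things down fast.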
To me it sounds like just another layer of indirection with Google right in the center of it.
Even if this method works well enough from an advertising perspective, I expect there will soon be adversarial models to deanonymize users.
I don't think that solution works in the current environment, unfortunately.
But ad networks will never implement this, because their priorities run:
1) ad network 2) advertiser 3) publisher 4) user
This bumps the user from 4th place to 1st place.
I am not claiming this is good or bad, but clicks are not a good enough signal of efficacy for the vast majority of ads shown on the internet.
Go check out the messaging around Ad Choices and how poorly it ended up working.
This... just doesn't apply to the scheme described:
>> If I clicked the ad, thumbs up, if not thumbs down. With appropriate weights this can work.
That is, the (mis)use of experiments on browser users. "Field trials." This is enabled by the use of "updates". When users agree to updates they agree to let a corporation silently install and run new code on their computer at will, at any time.
This permits the company to create a situation where person A's browser is not quite the same program as person B's, there will be differences. Thus the corporation can run an "experiment". Both person A and person B might believe "I am using XYZ browser". The two users believe it is the same program. However there are differences. The differences can be added and removed through "automatic updates".
How do users maintain privacy in that situation? The company behind XYZ browser can easily isolate groups of users with similar/different traits by conducting such experiments and observing user behaviour. "Cohorts". While the company may argue it is only testing software, there is an argument that it is also testing users.
The words in parentheticals above can be defined and redefined any way you like. What is important is what the corporation is actually doing, not the label/name/terminology they assign to it.
(Disclosure: I work for Google, speaking only for myself.)
How would FLoC audience targeting work in non-Chrome browsers? DV360 users deliver ads on all browsers, no?
Today, in browsers where third party cookies were removed without replacement, companies like Google that aren't willing to fingerprint have pretty limited user targeting capabilities.
(Not sure what you mean by "using known identifiers")
The Chrome proposal is that it won't: https://github.com/bslassey/ip-blindness
TOR is one solution here, which you could potentially also describe as "adding forced MitM to every connection". The proposals in https://github.com/bslassey/ip-blindness/blob/master/near_pa... and https://github.com/bslassey/ip-blindness/blob/master/willful... have different tradeoffs than TOR, with the "TOR is painfully slow" problem being a big one.
If you have better ideas, though, I would be very interested in reading them!
Looking at the corresponding TURTLEDOVE proposal, it's sending only a handful of the known categories to any given ad network at any given time. FLoC also claims that:
> The collection of cohorts will be analyzed to ensure that cohorts are of sufficient size
- audio waveform generation
- access to gpu/webgl info
- have to somehow dramatically change or remove ICE/webrtc
- standardize 'feature flags' e.g. somehow backfill old browser so they all show support for new JS objects
- access to only a small set of fonts
- somehow make rendering completely the same across browsers, or quantize measurement/rendering to something like 5px increments (e.g. the bounding rect of some two-character string in a specific font, or some SVG/CSS transform, leaks fractional values like 747744.888)
- testing for a ton of css extensions
- supported mime types
- a bunch of SVG things (I don't think this has been explored much; I have a hunch there are some good targets)
- a bunch of latency hacks
> the problem with targeted advertising is the targeting
Is it? The problem with advertising is the advertising. I don't like seeing ads at all, but one thing I know for sure: I'd take targeted ads over random ads any time.
> People don't like seeing their web history funnel into their advertising.
No. This is the problem YOU have with it. Most of the non-tech people I know have no idea what you are even talking about.
For example, if I search for "how to do a Subaru oil change" it's the perfect opportunity for the search engine to show me ads related to motor oil, Subarus, car maintenance videos, etc... If I opt in to sharing my location with the search engine they could also show me ads from local repair shops and car dealers.
Later on when I'm reading an article about dog training, I don't want to see ads about fixing my car, show me ads related to dogs. Use the context of the page for targeting.
Your normal usage isn't dog training or subaru oil changes. It's idle nonsense like whether Kim Kardashian is angry with Courtney Cox or whatever. There's not enough to sell you that's contextual. People have an idealized vision of what they spend time on. It's nothing like this productive stuff you're talking about.
Next time you are in a waiting room flip through the magazines that are sitting there. The ads are targeted to the likely readers of those magazines.
Online targeted ads are way better.
Not to be annoying, but advertising is serious money. If you think you can do good contextual advertising you will become rich very easily. Anyone will. It's hard for me to believe that no one is doing this supposedly easy and effective thing well since all the incentives are there.
Sure, but there are a lot of ethical problems with them.
Imagine starting a service that paired individual shoppers with a passive handler to follow them around a shopping mall to build dossiers including:
* everything they pick up and look at
* what they eat in the food court
* the clothing styles and sizes they try on
* what their transportation to the mall was
* their race, gender, age, and apparent ethnicity
Tired analogies and ridiculous strawman arguments aside, Facebook and Google are valuable because they have a valuable product in their advertising technology. What and how you use it is an entirely different issue.
Where does contextual advertising/influencing switch from being helpful to being gaslighting?
My wife buys diapers online, then ads for diapers show up on my computer.
I go shopping for underwear on one device, and then when reading a technical forum with co-workers on a different device, there's ads with people just wearing underwear.
The tracking is extremely excessive.
Maybe I should be the change and write it myself
If one advertiser pays $10 per conversion with a 1% chance to convert, and another pays $0.10 with a 90% chance to convert, the ad network will show the first one - even though the second ad converts 90 times better.
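A network ranking by expected revenue per impression (bid times conversion probability - a simplification of real auction mechanics) picks the higher-paying ad here:

```python
def expected_revenue(bid_per_conversion: float, conversion_rate: float) -> float:
    # Expected earnings per impression: bid paid on conversion x P(conversion).
    return bid_per_conversion * conversion_rate

ad_a = expected_revenue(10.00, 0.01)  # $10 bid, 1% conversion rate
ad_b = expected_revenue(0.10, 0.90)   # $0.10 bid, 90% conversion rate

# ~$0.100 vs ~$0.090 per impression: the network favors ad A even though
# ad B converts 90x better.
print(f"A: ${ad_a:.3f}  B: ${ad_b:.3f}  winner: {'A' if ad_a > ad_b else 'B'}")
```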
Then we end up with crap like the cookie banner - which completely ruined the internet for me.
There is not a day I don't have to close one of those, and the browser extensions that block them are nowhere near as good as ad blockers. Not to mention, accepting all cookies is one click, while rejecting them requires either a minute of thinking or wading through popups and menus.
The fact that 99% of people don't care goes right over their heads.
If people cared they would be using duckduckgo.
Funnily enough, I use duckduckgo and I think it's a great Google replacement - but I don't care about being tracked, I just appreciate the features (especially code snippets in search)
It's not the content you want and the fundamental idea behind targeted ads is precise manipulation of people's behavior which is an extremely dangerous thing to have at scale.
I agree that it sounds crazy and counter intuitive, but if everyone could post geolocalized ads, it would feel better.
It means that Google has found out that among 1000 people, your browsing criteria with HTTP headers alone is unique enough to identify you with 95% accuracy, which is actually even more frightening.
This seems to work pretty well for me (Google got really confused, thinks I am on Windows now - I am not).
Just don't turn off media.gmp-widevinecdm if you want to keep watching DRM-protected content (e.g. Netflix)!
If it is the latter, please file a bug against Firefox to get it fixed.
Nope, because the order a browser engine loads assets in differs between Chrome, Firefox, and Edgium, too. Combine that with Firefox's messed-up Accept header and you'll have spotted the TOR Browser users for sure. "@supports" in CSS is additionally a very unblockable way to track users, as it varies uniquely per browser version as features of CSS get implemented and/or fixed.
Usually traffic analysis for a client (with a specific ETag header) is enough to uniquely find out whether it's the exact same machine, hence that's what the header is made for.
The approach behind my browser tries to actively modify the contents of said malicious HTTP headers and to rewrite the HTML, CSS and other assets, in order to force-cache everything and offload as much traffic as possible to surrounding peers. But it's far from production-ready.
There's also a frightening amount of CSS features that can be used to track users very easily. @supports, @media, and a combination of <link media=""> and "srcset" attributes in a quick prototype was enough to track every client with around 98.3% accuracy, and I decided to not release the fingerprint.css project due to concerns how it might be abused in the wild.
Especially with Unicode behaviour inside the CSS files themselves. CSS ident-tokens are specified as "non-ASCII", so they can be emojis, too. And those have varying support across browser versions due to the ICU library embedded in them (which has been absolutely unique in every single subminor release I've tested so far).
Mixing in noise feels like a solution because it is hard for a layperson to see how a signal can be extracted, proof by "difficulty to me". If you mix in noise that changes averages then you are removing signal. If you add in random noise, each individual measurement deviates, but the limit of the average will be the same value pre-mixing.
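A quick simulation of that argument: add zero-mean noise to every individual reading and the readings scatter widely, but the average barely moves.

```python
import random

random.seed(42)  # deterministic for the example

true_value = 10.0
# Every individual's true measurement is 10; add zero-mean Gaussian noise.
readings = [true_value + random.gauss(0, 5) for _ in range(100_000)]

average = sum(readings) / len(readings)
spread = max(readings) - min(readings)
print(f"average {average:.2f} (individual readings spread over ~{spread:.0f} units)")
```

Each reading is off by plus or minus several units, yet the average lands back very close to 10.0 - the aggregate signal survives the noise.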
Some browser vendors have started to remove the specific version from the User-Agent string, for instance. The Tor Browser window is an actual square specifically to make that value (browser width & height) the same across all its users and improve their privacy (by making that value useless in finding uniqueness).
Hope that makes sense, sorry if I mis-explained some things
For example suppose my user agent string (and canvas fingerprint, accept header, etc) was different on every request. Would this be enough to stop ad networks from correlating each of my very-unique requests, and prevent them from tracking me across different pages?
Worst thing is, unless this is used by a very large chunk of the population, it would even be another tool to identify you.
How exactly? It seems like a difficult problem for a website to me.
2. They would have to track the noise across page requests to know that it was noise. Saving and correlating that state would be a pain.
Inconveniencing trackers is good for privacy, not bad.
Anyway, mods, here's a much better article that has more than one line of vague content:
Features used exclusively for tracking have no place among web standards, cohort-based or otherwise.
They want the browser to "discover" the user's interests automatically during browsing. For a page to be excluded from this, the page author would have to set a new policy header.
And then your browser reports these interests to whatever tracker (for example Google) asks for the information.
Suddenly Google can learn about what you're browsing even if GA is blocked.
Sure, cookies are used for tracking, but they are also used for authentication, which is something that nearly every webapp needs to do.
I just think that, due to articles like this, cookies end up being viewed as nothing but bad, when they are an important tool for the web when used properly.
More on-topic of the article:
this doesn't look like it really changes anything, to me. Like, instead of cookies being used to track your data, they use a _browser extension_?? That is potentially even _more_ invasive. Sure, if it does what they say it will do, it kind of obfuscates your personal data. Really, what people want is just... fewer ads. Fewer targeted ads. This doesn't achieve that.
It's kinda silly that we can't manage our website logins via the browser without clearing all the cookies for a site.
What if we could move authentication or more specifically the state held in the client for authentication to some other mechanism? Could we pitch cookies? Could we make this switch without making it somehow possible for advertisers to switch to the new mechanism?
Unless you're going to throw out local storage and custom request headers, getting rid of cookies isn't really going to do anything except make the same thing less secure (since you won't be able to benefit from the HttpOnly flag).
The user clicks 'log in with google', their browser gets forwarded to whatever.google.com, the (now first party) cookie gets checked, then the user gets forwarded back to your site with the access token as a parameter in the GET request.
No third party cookies needed.
Typically the user agent will redirect to a Google/etc login page where it'll have access to first-party cookies. Then will redirect back to the site which requested authentication, passing state in the query params. It's only when you get into using stuff like Okta as a delegated authentication service do you run into trouble with 3rd party cookies.
Edit: As an example, I just logged into Stackoverflow using Google as the authenticator, with umatrix blocking 3rd-party cookies. Worked without a hitch.
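The redirect dance described above can be sketched with plain URL building (illustrative endpoints; real providers use their documented authorization-code flow with more parameters such as state, scopes, and PKCE):

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_login_redirect(client_id: str, redirect_uri: str) -> str:
    # Step 1: the relying site sends the browser off to the identity provider.
    return "https://accounts.provider.example/auth?" + urlencode(
        {"client_id": client_id, "redirect_uri": redirect_uri}
    )

def build_callback(redirect_uri: str, code: str) -> str:
    # Step 2: after checking its own *first-party* cookie, the provider sends
    # the browser back with a one-time code in the query string.
    return redirect_uri + "?" + urlencode({"code": code})

login_url = build_login_redirect("my-app", "https://relying-site.example/cb")
callback = build_callback("https://relying-site.example/cb", "abc123")
code = parse_qs(urlparse(callback).query)["code"][0]
print(code)  # prints "abc123"; the relying site exchanges it server-side
```

All the state crosses sites through redirects and query parameters, so no third-party cookie is ever read.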
I block third party cookies indiscriminately, and I never had issues logging into Facebook, Google, Twitter, or any Microsoft service except for Microsoft Teams. This includes logging into third-party services using the Google login.
If the site is essential I wipe the cookies before and after reading.
Most of the time I skip the site in favour of another one.
That's kind of the point of being logged in, to let the website know who you are.
Also, there's no "privacy-friendly" tracking technology; it's an oxymoron and a slick marketing/corporate strategy (that works).
"A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate."
BTW, Chrome users have been part of this system for nearly a year now.
Either way, I wouldn't read too much into it.
I believe "FLoC" comes from the phrase "birds of a feather flock together", which is a decent metaphor for how the proposal works
(Disclosure: I work for Google, speaking only for myself)
And the privacy folks love their hyperbole
Not necessarily disagreeing with your actual point, but neither of the F's in EFF stands for "freedom".
It's hardly a big deal, I almost regret commenting about it.
- give them money
- offer them a political advantage over their opponents
- build up enough grassroots support among their constituents that not supporting your positions would be effectively career suicide
The best organizations utilize all three.
> because logic just doesn't work for tech
is more than a little ironic to me. I suppose it's like people saying you can't 'out-logic' a judge through a technicality, or that the law does not mean they are Perfect Laws of Logic; they're designed to be interpreted by judges.
I admire the EFF though, and I support what they advocate for. I was going to say I'd like them to be more pragmatic, but I suppose their hardline opinions are most of the reason they exist as an advocacy organization.
To your actual point, though, I think you've totally nailed it. I'm not sure if emotion and politics are permanently inextricable, but from my limited, unfortunate experience, it seems as though any kind of logical argument doesn't or can't (!) sway anyone (myself, of course, included!)
One false click and you're damned for weeks.
The OP says "The API exists as a browser extension within Google Chrome," which made me think it was a separate browser extension that I would not have if I had not chosen to install.
Are you saying it's built into Chrome instead? Cite? And if so is there any way to disable it?
I understand how first-party cookies are useful - you take a stateless protocol (HTTP) and make it aware of "sessions". And those in turn are a nifty block to build upon - login/authentication, "shopping cart", whatever...
But having one website to be able to save state that is only accessible to a chosen different site - what's the use for that?
I see this is as a tragedy of the commons problem: it's kind of nice to have, say, a counter of unread Disqus messages but the relative value of that compared to the use by tracking companies is hard to ignore.
Instead, I think we're going to recognize that this is too broad to be secured and either come up with ways to scope it down (e.g. requiring the third-party to have some sort of opt-in prompt) or that entire market category replaced with browser-controlled alternatives, which isn't great for companies other than Apple, Google, and maybe Microsoft but does have the appeal of not trusting an entity which the user isn't already trusting.
The original RFC that introduced cookies specifically said that third-party cookies aren't permitted. Then Netscape broke it. Then everyone else did. It's about time browsers become spec-compliant.
Google also happens to make the browser with by far the largest market share. So they're not going to axe third-party cookies as long as it drops their revenue 5x.
This is only "privacy friendly" because Google limits the accuracy of the fingerprint provided to advertisers by bucketing users into cohort groups. These groups are supposed to be large enough to prevent advertisers from identifying individual users.
Google would still retain the ability to uniquely identify individual users.
This really only seems beneficial to Google, not to other advertisers, or to end-users.
In the proposal, the non-clustered data does not leave the user's device: https://github.com/WICG/floc
The other big thing that concerns me about this is how it still allows for some of the worst abuses. They are still going to possess entirely too much information about people and will continue to sell advertising that takes advantages of that information.
At the time I was laughed out of the room. Turns out I was just 24 years too early.
Tracking a group of 1,000 people to cater bad political ads isn't meaningfully better than targeting 1,000 individuals with bad political ads.
Targeted advertising needs to be treated like unfair gambling practices. Banned across the board, and the industry that remains needs to be heavily regulated and forced to be completely transparent about the process.
You say that like it's an absolute that is enforceable. Advertising has been targeted for as long as advertising has existed. Advertisers have been choosing radio or billboard slots for well over 100 years (radio for 100; print and billboards probably for centuries), using data or educated guesses to reach a target demographic. Advertisers now likewise choose which sites their ads should (or shouldn't) appear on, in order to reach their target demographic. Of course, they can choose by a few more criteria now. How ubiquitous would a specific ad have to be for it not to count as "targeted" advertising?
I think we need legislation, but it's not black and white. There's a huge grey area that spans all of advertising history.
Edit: just to be clear, choosing whether to advertise in a newspaper (and which one), on radio (and which station), or on Facebook, is already targeting a desired audience.
It's all user targeting.
We get into issues when we're dealing with ads being placed based on who is viewing the content.
It is possible to make it technically impossible. Not allowing third-party cookies is one way. This alone would only allow targeting by the coarse location derived from the IP and by the user agent string.
Cookies aren't required for that.
If I buy a MTB magazine, I do expect to see some ads from this or that bike-producing company (even though I'd prefer them not to be there).
Dear advertisers, I do not want to be herded in a bubble (designed by you or anyone else), I actually like to know the world around me.
(And this is even more valid for the things I'm not that familiar with anyway. How would I learn about those segments of reality, if not from the advertisements you would prefer not to show me?)
This is gaslighting. Interest-based advertising on the web is not an immutable feature, a naturally occurring phenomenon. It's a scourge invented to further surveillance capitalism and it must be abolished.
All this is to say, I'd change your letter to say:
Stop tracking me or I'll block you entirely at every turn. Your business model does not concern me. My attention is not for sale. Change, or be regulated out of business.
Interest based advertising is simply optimizing ads for conversion rate. Slow down with all the philosophy.
Advertisers and trackers have been doing what this thing is supposed to do for years. And where will they implement it? The only way would be at the application level, so every browser now also has to implement internal tracking services to aggregate all the data into their FLoCs, and then come back to the user to spice up their requests? Come on...
I'll keep supporting efforts to make the Internet a more privacy-focused place. Advertisers have been buying TV ads for decades, and my TV hasn't asked me what I want to share with it yet.
What makes Google think that Apple or Mozilla are going to add this to their browsers?
No but seriously, does being in a group of "thousands" of people really preserve privacy particularly well? It seems quite likely that with groups that small, membership itself could be considered privacy-compromising, e.g., a group of people that all have some medical condition.
At the most fundamental level, I feel like if you know which advertisements are targeted at me, and those advertisements are well targeted, then my privacy has been invaded.
It seems to me there is a fundamental conflict between well-targeted ads and protection of privacy.
True, but it is too much to ask Google to throw "the baby out with the bath water", as it were. For all their faults, I am encouraged that Android and Chrome, if no other product teams at Google, are pioneering Federated Learning and Differential Privacy, and are now pushing ahead with the Privacy Sandbox. I wholeheartedly agree it simply isn't enough, but it is substantially better than what's in use right now.
Like everyone else though, I am worried for the same reason I dislike AMP (accelerated mobile pages) despite it bringing noticeably better user-experience for many: Google has this nasty tendency to make things seem more "open", "benevolent", and "private" than they really are.
No it isn't; the 'baby' of targeted adverts is a net negative for society. We'd be net better off if most Googlers just retired and spent their days digging holes and filling them in again. It is not "too much to ask" that people stop messing up society, even if it's making them billions.
If I were an advertiser I would have serious doubts about this, especially given that ad spend on popular platforms has had no impact on many firms' bottom line.
I guess this is a response to all the pushback and dwindling PPC revenues from increasingly wary advertisers, who have quite possibly been duped into transferring their cash to Google & others over the decade.
Invasive targeting is only 20 years old, a blink in the history of advertising. If it was gone tomorrow these companies would just go back to targeting based on the ad placement rather than unique person viewing it. What we have now is the dystopian sci fi movie where ads shift as different people look at them.
If you don't think it's dystopian, consider that every ad your coworkers see when you're sharing your screen is based on the best targeting data advertisers can find. Your screen is disclosing your interests, wealth, medical history, kinks, etc to anyone looking at it. It's fucked up.
They are using privacy preserving techniques, and even if we assume they are doing it well, it just means that we will never get rid of the profiling and paying to get privacy will not happen with google services
There are few websites that break without them (e.g. logging into Atlassian), and that's mostly due to bad design (given that every other login flow out there works fine).
Their main use has been to track people; hence, we don't really need them at all.
It's worse than that. Google is an advertising company that makes a browser (63.38% of browsers globally) and mobile operating system (72.48% of phones globally) to vertically integrate, controlling your privacy choices. They're also trying their hand at PC's (ChromeOS, 1.72% globally). They invent technology across the stack, providing software for free or paid, and open sourcing some to commoditise the technology and to starve competition. I'd be interested to see how many people use Gmail.
"Cookies are considered third-party data, or user data that's collected indirectly from users via browsers or websites."
This statement seriously requires qualification. This is exactly what contributes to unreasonable regulation and confused users.
Attribution is a big pain too.
Without those two things this simply makes google more valuable while killing everyone else who doesn't have their own browser which tracks everything from your login, analytics on basically every website, and more.
The proposed system deals with large groups and machine learning. It requires a browser add-on or changes to the browser. This is not approachable for startups, small businesses, or independents. It's targeted at Google and further helps solidify their position.
As people want to cut Google off from constantly monitoring them they are looking for ways to work around being cut off to keep the data flowing. Branding and marketing their work to make people want it.
So the NYT rediscovers publishing? Once upon a time, publishers had teams of salespeople who had relationships with companies needing to advertise and coordinated their ads with the publishing schedule. Publishers then gave that advantage away to sell ads for a fraction of their current price on a per-view basis, and have been crying ever since.
Companies like Google make billions on ads. An overwhelming majority of Google's income is from ads. It isn't just that Google served new markets or that publishers outsourced the work.
Ad systems use a bidding system. Google controls both sides of the bidding system. The system is setup in a way where Google has benefited more than others.
It reminds me of record labels and producers. They make the lion's share of the money on record sales for most albums and music. The artists typically get a small share. Some artists have walked away with a middling income while selling millions of albums, with the label/producers making millions.
The lower income has pushed publishers toward more shock-and-awe-type articles that aren't good for us. They hire more inexperienced people and pay them less, so there is less mentoring. Overall it means the quality of the published stuff has gone downhill.
anatomized? Is that a typo for anonymized? Or does this mean something?
Instead of the user disabling third-party cookies, every single page author would have to set a new HTTP header to have their page excluded from the machine learning.
It also looks like a massive GDPR pitfall.
Say you track conversions to a campaign, and track what "cohorts" a user entered your sales pipeline through. The moment you connect this data to the customer, it's personal data IMHO. If you operate in Europe, the user should be able to retrieve/delete the data, and request it changed if they say it's wrong.
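For reference on the exclusion header mentioned above: during Chrome's FLoC origin trial, as I understand it, a site could opt its pages out of cohort computation with a Permissions-Policy response header along these lines:

```
Permissions-Policy: interest-cohort=()
```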
I get full access to everything you do online, and get to do anything I want with the information. Perhaps I'll use it to maybe target ads slightly better in some cases, and put myself into every value chain you're involved with so I can get a cut at every step.
Oh, and I'll do my best to move all computing online, so that 'everything you do online' equals 'everything you do with a computer'.
In exchange you'll get a web browser that is at times more performant than the others. Hell, I'll even throw in a free email account (where I can gather all the best bits of info)!
It's a pretty good deal, don't you think?"
https://www.pcworld.com/article/116657/article.html#drr-cont... (250mb free increase was suspected to be a response to Gmail)
Whenever I visit any website, courtesy of the GDPR laws, we are asked to review the terms and cookie settings. I make it a point to disable all cookies (barring the strictly necessary ones) and partners, and also, in the "Legitimate Interest" section, to click "Object all", and then "Save and exit".
However, on many websites I don't see any option to "reject" or "object" to cookies, partners, vendors and especially legitimate interest. I'm particularly concerned about Legitimate Interest, since the number of vendors there is humongous. A good example of a site where we cannot choose would be the BBC. We get an option only to read their terms and conditions, but no option to reject and object.
1) Can anyone please guide how to reject to cookies on such sites where they don't have a reject option present?
Also, in my iOS, in Safari settings, I have chosen "Block all cookies" to yes.
2) How far will blocking all cookies safeguard me from unscrupulous cookies? If "Block all cookies" is enabled in Safari settings and I visit some malicious site and accept its cookies, would the owners of the malicious site be able to do anything sinister or adversarial to my privacy and integrity? Would they be able to breach my security?
But wouldn’t blocking JS make all websites dysfunctional for me?
Also, how do I get rid of persistent storage like IndexedDB, as you have highlighted?
Does clearing cache and cookies from browser help?
Clearing the cache and storage should be enough.
I know how to clear cache but may I ask what do you mean by storage?
I don’t know how to delete that.
Now Google wants to offload machine learning to our browsers.
That will be bad for battery life, electricity bills, and the environment.
On the other hand I use free software. So I can make a version of their extension that just claims that I am obsessed with Ironing. That will also make it easier for the Ad-blocker to do its filtering.
It is not a replacement. It is a proposal how to replace a free, standardized and open world wide web feature with a commercial service.
Say we had an Open, standardized, human-readable list of categories/groups that people could opt into (rather than a bunch of on-the-fly groupings determined by an AI). We could give users the ability to choose 0-X of those categories that they want to associate with. We could even let them choose on a site-by-site basis, so they could decide how ads would be targeted (or if they would be targeted at all) on parts of the web.
We could build UIs that helped them with that. We could have easy ways to opt into or out of categories. We could allow them to turn on category suggestions, so with their permission if a user visited a site about a specific kind of product, we could show a one-click option in the browser to add themselves to an associated category and see ads for similar products.
We could allow them to group sites together and say things like, "I want news sites that I visit to know that I'm looking to buy a specific brand of car, but I don't want any of the car dealership sites that I'm looking at to know what brand I want."
For users that don't want that level of detail, we could still have a 'smart' system that consumers could run (clientside) that looked at the websites they visited, or even more personal data, and auto-placed them in categories without them needing to think about the system at all. They'd just need to select an option to let the browser handle all of their categories for them.
But importantly, all of this would be based on consent. And instead of offering users a single choice to opt out, they would have an entire spectrum of choices that allowed them to decide how they presented themselves online, what specific data they shared, and who they shared it with.
If users genuinely benefit from targeted ads, then they'll opt into the system and pick categories that are relevant to them and send them to sites. If they think Google's data collection is accurate, then they'll turn on the smart system in Chrome that locally categorizes them. But at any point, for any site, they could choose to turn off the data entirely, or to add themselves to a specific category, or to remove themselves from a specific category. In human-understandable terms, they would know exactly what data they were transmitting to websites.
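No such system exists today, but as a sketch of what a consent-first model like the one described above could look like (all class and category names below are invented for illustration), the browser could keep a small policy table it consults before disclosing anything:

```python
class InterestProfile:
    """Hypothetical consent-first interest store: the user opts into
    human-readable categories and sets per-site rules; the browser
    discloses only what the rules allow. Illustrative, not a real API."""

    def __init__(self, categories):
        self.categories = set(categories)  # categories the user opted into
        self.site_rules = {}               # site -> subset that site may learn

    def allow(self, site, allowed):
        self.site_rules[site] = set(allowed)

    def block(self, site):
        self.site_rules[site] = set()

    def disclose(self, site):
        """What an ad request from `site` is permitted to learn.
        Sites without an explicit rule learn nothing by default."""
        return self.categories & self.site_rules.get(site, set())

profile = InterestProfile({"cars", "cooking", "news"})
profile.allow("news.example", {"cars"})  # news sites may know I want a car
profile.block("dealer.example")          # dealership sites learn nothing
```

The car-shopping example maps directly: `profile.disclose("news.example")` yields only the car category, while the dealership site gets an empty set.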
For all that Google says they're working on data privacy, very few of their proposals, even their good proposals, approach privacy from an angle of giving users more control over their identities. Google is still stuck in a world where they think of data collection as something that has to happen without the user's knowledge, without the user's ability to easily inspect what's going on, without the user's ability to form multiple identities, or even to just opt into the system at all.
What I want is control over my data. And what Google (and companies like them) keep on saying is, "we'll be somewhat more responsible with your data, but only if we keep control of it."
And this represents a general attitude that comes up in so many modern tech products, from Youtube, to social feeds, to modern UI design, to device security. These companies are like a controlling, overbearing parent. People want agency over their ads/recommendations/feeds/etc, but the companies think the problem is that they're just not good enough at controlling all of that for us. It's a way of thinking about UX/product/process that's divorced from user consent and agency as ideals that we should strive towards.
But this isn't how ads work. Most ads are served by an ad network, such as doubleclick, which does realtime bidding on the ad space. The exchange of cookies is between you and the ad network, not between you and the publisher hosting the ad or you and the advertiser who placed it.
Otherwise you make some great points and it seems that FLoC could well provide such tools for the user, because their profile is now rich and client-side instead of being stored on some tracker server and keyed by an opaque cookie id.
But my browser could be smart enough to only exchange those cookies (or whatever data format we want to use) based on the current origin I'm visiting. There's no rule that requires me to use the same doubleclick session on CNN and Reddit.
If I visit a website that's classified as a news site, at that point doubleclick wants to know information about me so it can hold an auction to show me targeted ads. At that point, the browser could send doubleclick any information -- it could be based on the current origin, it could be based on the time of day, it could be based on what profile/identity I have manually switched on/off.
The point I'm getting at is the model of "you have one persistent identity that doubleclick can hook into anywhere" is already broken. Part of choosing how we present ourselves online includes the freedom to have multiple identities and to choose when/how those identities are revealed.
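Browsers have in fact started shipping something along these lines under the name "state partitioning" (Firefox's Total Cookie Protection works roughly this way): third-party storage is double-keyed by the top-level site, so the ad network gets a separate session on each site you visit. A minimal sketch of the idea, with invented domain names:

```python
class PartitionedJar:
    """Sketch of a double-keyed ("partitioned") cookie jar: third-party
    state is keyed by the top-level site it was set under, so the same
    ad network sees a different session on each site you visit."""

    def __init__(self):
        self.store = {}  # (top_level_site, third_party) -> {name: value}

    def set(self, top_level_site, third_party, name, value):
        self.store.setdefault((top_level_site, third_party), {})[name] = value

    def get(self, top_level_site, third_party, name):
        return self.store.get((top_level_site, third_party), {}).get(name)

jar = PartitionedJar()
jar.set("cnn.example", "doubleclick.example", "session", "abc123")

# The same third party, embedded on a different top-level site, starts fresh:
assert jar.get("cnn.example", "doubleclick.example", "session") == "abc123"
assert jar.get("reddit.example", "doubleclick.example", "session") is None
```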
What is the public understanding/perception of cookies? The past couple of years since the implementation of the GDPR have probably been the biggest and weirdest public education campaign (done entirely through brief pseudo-consent popups).
Don't fall for it. Break up with Google. They are abusive.
In programming and in business you talk about "finding" a solution to a problem all the time. In this case, the problem is how to improve privacy without advertising revenue dropping off a cliff. And it's not like the solution is staring you in the face -- it takes iteration and testing for it to be "found".
So I don't think Google is being disingenuous here. Nothing is being implied as a somehow natural phenomenon. Business in general is about "finding" satisfactory solutions to problems day in and day out. "Find" and "develop" are essentially synonymous and interchangeable here.
It's clear that Google sees a threat to their business model, and they'll use any PR-friendly language they can to convince people that they're addressing the user concerns. Just like they did in 2010, when they ascribed their willful malfeasance to a "rogue engineer" who they then put in charge of StreetView.
If I'm "reading too much into it" it's because we collectively haven't been reading enough into it for the past 15 or so years, and in that time our Overton window has shifted too far.
But I'm absolutely not fine with the wording. When confronted with the phrase "Google says it may have found a privacy-friendly substitute to cookies", I'd wager most people would think "this improves my privacy in a general way".
It doesn't. What it actually means is: Google will make it more difficult for others to track you, but Google will remain committed to tracking everything about you that it possibly can. Except silently, and without your ability to opt out using ad blockers etc.
Net result: even less control over your privacy, and Google further entrenches its monopoly to boot.
That's not an improvement in privacy.
Okay, that's obviously not true from your own comment, you also say:
> Google will make it more difficult for others to track you
So if Google can still track me at the same level, but others can no longer track me, that IS an improvement in privacy. Not from Google, but from everyone else.
I get and agree with your point about how it's an advantage for Google because they can make their tracking harder to avoid and lock others out, but I find it hard to sell "This is not an improvement in privacy" as being unequivocally true.
So, again, this is not an improvement for users' privacy.
Not to mention that while adblockers currently do not do anything against this practice, this does not mean that adblockers can never come up with anything to block this tracking.
Also, others may not be able to track you at the moment, but they definitely will in the future.
So... Google reached a bit deeper into our privacy and paved the way for others to follow them. I can't help seeing them as evil.
This is only true if Google does not sell the gathered information to others. If they sell the data, the net privacy gain is nil.
“Selling personal data” — as if your particular affinity for left handed baseball gloves were of special interest to large corporations — is a red herring. Let’s stop perpetuating it.
At the time, "Engineer put in feature not asked for."
Later, "Upon full examination, engineer put description of feature on a piece of paper shoved in front of a busy manager, and told selected co-workers what he had done." (None of whom, when the shit hit the fan, should be expected to stick their necks out.)
Neither version suggests that the feature was something reflective of corporate policy, or would have had support from higher ups if they knew about it. Also, said engineer turns out to be a very good programmer. Which explains the company's decision to try to keep him and correct his behavior rather than immediately firing him.
Google specializes in automation at scale, not lovingly handcrafted data.
I emphatically believe the opposite. Data collection, storage, and manipulation is ever becoming easier. The only actual choice is between a society where we're lied to about surveillance, or one where surveillance is generally available. https://www.amazon.com/Transparent-Society-Technology-Betwee... laid out the case for this over 20 years ago.
Here are the realistic choices.
On the one hand, we can create any set of rules we want on paper. We can get governments to officially support it. We can be frustrated as those same governments do it ineffectively. And then watch as the rules meant to curtail monopolies get caught by regulatory capture and are manipulated to support the very organizations that they are theoretically supposed to punish. (Seriously, do you expect any secret service to not take advantage of what is possible? Have you heard of Snowden?)
Or we can choose the path recommended in https://www.amazon.com/Transparent-Society-Technology-Betwee..., accept that surveillance is real. And put the tools in the hands of the masses. This is already happening. See https://asherkaye.medium.com/do-you-know-this-man-7836e54abc... for a story of how a random person in a random photo was tracked down by an internet stranger using reverse image search with facial recognition. And the tools are only getting better and harder to stop over time.
I personally hate both futures. But I hate the first one more. And I see people like you as unwitting pawns who are creating the first of those two futures. And your unwillingness to understand how things actually work, combined with your certainty that you've got the moral high ground, makes you an easily manipulated true believer.
Enjoy your certainty that you're in the right here. I guarantee that you'll have a lot to be upset about in the way that our world is shaping up.
The idea that the surveillance tools are "put in the hands of masses" neglects the part where the "masses" includes corporations that do it better, because they have the ability to pay handsomely thousands of people to make it effective. So rather than accept the defeatist position -- "We are powerless to stop technology," even though surveillance capitalism is a choice, and one that we don't have to accept -- we can choose meaningful laws which restrict those actions. We can choose meaningful laws which change the economic imperatives so that corporations don't profit from tracking and shaping human behavior. Will it be perfect? Absolutely not. Is defeat inevitable? Perhaps. But sitting on the sidelines and choosing not to shape that future because of some sense of foregone inevitability.... that cannot be the future, unless we believe that those now with the ability to shape it (and someone will -- Google, Facebook, or someone else) deserve to have that right without being challenged.
I cannot accept that. I will not be gaslit into believing that I'm 'too concerned' or 'reading too much into it'. This does not have to be our future.
I'm not much concerned with descriptions, so what's your prescription:
What do you believe to be satisfactory wording in this case?
Ah, the ol' "right to my business model" attitude. Not blaming you in particular for this, but it's pervasive. Needless to say, I thoroughly and utterly disagree that any business has a right to any business model, particularly one that robs me of my time, attention, and computational resources.
Let's take a look at the "Now Playing" architecture available on Pixel devices.
At first glance, a critic might think: "You're crazy for giving Google permission to have your microphone always on and listening for songs you're hearing, privacy this, privacy that."
If you read into it, you'll be comforted to know they've built a model that generates signatures client-side, which can be compared on-device against a list of similar signatures. Then, as far as I understand, they are able to take signatures that contain no discernible audio data and use those to discover new audio trends.
> On Pixel 4 and later phones, the counts of songs recognized are aggregated using a privacy-preserving technology called federated analytics. This will be used to improve Now Playing's song database so it will recognize what’s playing more often. Google can never see what songs you listen to, just the most popular songs in different regions.
Privacy-preserving, user-beneficial, and useful for advertising targeting if you haven't opted out of interest based ads.
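The actual Now Playing pipeline derives fingerprints with an on-device neural network, but the matching step can be illustrated with a toy nearest-neighbor search over bit signatures (everything below, including the database, is invented for illustration). The point is that no audio ever needs to leave the device:

```python
# Toy illustration of on-device matching: compare a locally computed
# bit-signature against a downloaded database by Hamming distance.
# Now Playing's real fingerprints come from a neural network; this just
# shows that the lookup itself can happen entirely on the device.
def hamming(a, b):
    """Number of differing bits between two integer signatures."""
    return bin(a ^ b).count("1")

def match(signature, database, max_distance=4):
    """Return the best-matching song title, or None if nothing is close."""
    title, fp = min(database.items(), key=lambda kv: hamming(signature, kv[1]))
    return title if hamming(signature, fp) <= max_distance else None

db = {"Song A": 0b1011_0010_1100_0101, "Song B": 0b0100_1101_0011_1010}
assert match(0b1011_0010_1100_0111, db) == "Song A"  # one bit off: a match
assert match(0b1111_1111_0000_0000, db) is None      # nothing close enough
```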
But once you have given them access to your microphone, you have to trust that their software does what they say it does, without mistakes or bugs (whether in design or implementation) or accidental security vulnerabilities (possibly maliciously introduced by the NSA or who knows).
If you do not give them access to your microphone (assuming the OS access controls themselves work; but that's a much smaller attack surface), you do not need to understand or trust anything.
They are a company that will only pay attention to privacy when forced to by an existential threat. It just isn't in their company DNA to care about user privacy. Users aren't the customers.
So, OP is likely mistaken in their comment.
Note to readers: This is false.
EDIT: If you're downvoting, please provide evidence. There is a lot of misinformation out there, and OP's post increases it.
EDIT #2: Here is Rishi Chandra, GM of Nest: "Putting a microphone on a thermostat, I actually don't think makes any sense"
It was in their security hub which is perhaps better or worse than the thermostat depending on your view.
And I just didn't want HN readers to think there was a mic on the thermostat, so I was correcting that.
Remember when Google sent hundreds if not thousands of cars all around the world and 'accidentally' hoovered up massive amounts of information?
I'm sure it was all an innocent mistake. Google are certainly worthy of our trust! /s
I don't, not in this case. The only thing that FLoC seems to change is how data is aggregated and how buckets are determined. But fundamentally, the idea of taking users, putting them into a box based on their normal browsing habits behind the scenes, and then broadcasting that box and associated data to every website they visit -- that's just not a private model.
What Google doesn't seem to understand (or chooses not to understand) is that the end result of bucketing users and sharing data about them behind the scenes while they browse is the part that many people object to. So Google keeps on trying to come up with systems that allow them to serve different content to people and to collect demographic info based on variables and processes outside of users' control -- but to somehow do it in a way that is magically not a problem.
But it's like trying to create a 'nice' mugging. It's not just the methods I'm opposed to, it's also the end goal.
FLoC still doesn't give users control over how they present themselves on the web. And part of privacy -- part of the reason I care about privacy in the first place -- is because people should have control over how they present themselves on the web. There are tools Google could build if they wanted to go in that direction, but FLoC remains an opaque system that runs in the background that collects data about you and sends it to every website that you visit. That's not a private system, regardless of how the data is collected. It's not designed to be transparent, it's not designed around user consent.
Honestly, it shouldn't even be an opt-in/opt-out system. Why can't I choose what buckets I belong to? Google isn't thinking deeply about user choice, they're not even being remotely imaginative about how they could give users more power over what ads are shown to them. They're still stuck in a mindset of "this needs to happen behind the scenes outside of your control where you don't know what we think about you. And we'll let you opt out of the entire system purely because we're forced to. But nothing else!"
That’s been on my mind a lot lately, so much that I wrote a thing about it: https://kronopath.net/blog/segmented-identity-as-necessary-f...
If Google does this, and other browsers don't, it would mean that just using another browser like Firefox will effectively hide you. Hoping for that.
HN: Don't be tricked, Google is evil.
Like I understand WHY there's a hate boner for Google. I just don't understand why people think it's bad for them to acknowledge the preferences of their users and make decisions accordingly.
There's a significant amount of gaslighting in claiming that you value user privacy while developing new ways to track users.
If companies could anonymously track users, and still maintain the marketing backbone of the internet I think most people would be fine with it -- in fact, prefer it.
If users had to go to a setting to turn on targeted ads, what percentage of them do you think would do it? I suspect it would be pretty low. I wonder if most people would even notice that the setting had been turned off?
We use the opt out model all the time to justify why users don't actually care about tracking -- we say that they'd opt out if they did care. But I feel like we all mostly know that an opt in system would also not see much use (that's the reason why ad networks are so opposed to them), and I don't know why we don't consider that to be evidence that consumers probably don't value targeted ads very much at all.
I believe a sizable portion would. They like the targeted offers and ads. Maybe because they enjoy the feel of something being catered to them, maybe because they are addicted to shopping/consumerism. IDK.
> If users had to go to a setting to turn on targeted ads, what percentage of them do you think would do it? I suspect it would be pretty low. I wonder if most people would even notice that the setting had been turned off?
I think this is a really good question. The power of opt-in vs. opt-out, as you noted.
However, I don't know if we can conclude whether they value it or not solely from their willingness to opt in. We really have to account for how the ability to opt in is exposed. If we showed it on every site (akin to the cookie-accept craze of today), we'd see a lot of people opt in. If it were hidden in Chrome's settings, far fewer would, just because that's mentally off limits for many, and easily forgettable.
I totally agree with you on the somewhat sinister motivation of opt-out over opt-in patterns.
If users are given the option in clear terms, most users will turn off tracking. Facebook knows this; it's why they are so pissed off at Apple and have taken such an aggressive public stance against them. Google knows it; it's why they haven't published an update to any of their iPhone apps since Apple started requiring apps to report what end-user data they collect.
Google and Facebook are sure of it... and I'm not sure anyone else is more qualified on this.
> If companies could anonymously track users, and still maintain the marketing backbone of the internet I think most people would be fine with it -- in fact, prefer it.
If this were true, why doesn't Google, Facebook, and others give us straight-forward ways to opt out? If people would prefer it, why exactly is Facebook trying so damned hard to prevent Apple from giving people a simple opt out?
I don't think anyone expected them to just flip the switch and do that without a reasonable (maybe only to them?) alternative. I can say that the idea of using 'cohorts' as discussed in this FLoC approach is, from what I can tell, positive progress. Is it far enough? Perhaps not.
> why exactly is Facebook trying so damned hard to prevent Apple from giving people a simple opt out?
Good question. I am not aware of that issue.
I also question Apple, as they take payment from Google to the tune of billions of dollars for search, pushing 'beacons' etc., while promoting themselves as a bastion of privacy and security. None of it is as simple as it seems.
No more than anyone expects a heroin addict to stop cold turkey. The problem is Google isn't stopping or giving people the option to opt out, they are just changing tactics slightly.
This also doesn't really talk about how this data gets integrated into the rest of the profile Google has built and will continue to build on users (without their permission) based on their search history, mapping, email, etc.
Did you accidentally clip the rest of your message?
How often do consumers even get asked? My webmail provider seems to have no issue providing both paid and ad-supported plans. Other services just pulled the paid plan out from under my feet. WhatsApp, with its new terms and conditions, once had a small yearly fee; Facebook dropped it. User choice? Certainly not mine.
> If companies could anonymously track users
That is like trying to identify a suspect using a smiley face. If they track you it isn't anonymous.
> and still maintain the marketing backbone of the internet I think most people would be fine with it
Why do we need targeted ads? Websites usually have topics they are focused on, is it wrong to show car ads on a page for car enthusiasts? On a news story showing a newly released car?
Purely anecdotal, but I've had this discussion with quite a few non-tech folks over the last few years, as privacy/tracking has hit the zeitgeist.
Many dismissively state something like, "I know. Don't care. Means stuffs free right?", or "I'm not doing anything wrong, I don't care".
> That is like trying to identify a suspect using a smiley face. If they track you it isn't anonymous
By that I mean regulations around what they track and what counts as identifiable data: not being able to explicitly say User2021 === Josefx on the system. I think this is why Google is going with 'cohorts' in their FLoC approach.
> Why do we need targeted ads?
Good question. "Need"? Probably not. But if I am on Facebook, and ads are going to happen, do I want highest-bidder ads like "Find Hot Milks in your Area Now" interspersed between my feed's family baby photos, or an ad for "World's Best Uncle" T-shirts? There's a happy medium somewhere.
I'm as pro-privacy as they come, but until someone comes with the incentive or dedication to build an alternate payment ecosystem out of nowhere, ads are what the web is built on.
Aside from that: why on earth should we trust that Google will limit their collection to this one method? They've lied over and over about what they collect. They've been caught multiple times breaking laws to collect information, only to say "Oops, it was a rogue engineer." They've been caught bypassing the no-tracking flag in browsers. They've been caught abusing location data after users disabled it.
As the saying goes: Burn me once, shame on you. Burn me 750 times, why in the fuck am I still using Google?
They said they "found" something. I see similarities.