Privacy Analysis of FLoC - https://news.ycombinator.com/item?id=27463794 - June 2021 (2 comments)
Not saying this is ok, but this was the plan from the beginning: being able to offload responsibility away from Google. This article actually shows what it's like for an advertiser.
EDIT: Did not finish the article before posting; they actually talk about CafeMedia's work.
FLoC is NOT proven to be incentive-compatible with consumers and how they value their privacy. The only guarantee is that users are (on average) harder to distinguish within a cohort. Google absolutely studied the possible economic consequences of FLoC prior to announcement and they are hiding that study. Either because the results are crappy, the Google Ads employees are less competent than they were a decade ago, or both.
Could you walk through your ideas of the incentives here? I'm curious because while I like the idea of FLoC in general, Google is the last company I trust with this. Moreover, there are a lot of details (such as cohort sizes) that have the potential to mask identity and align incentives, but that have been left underspecified by Google so far.
In simple terms, it states that the mechanism that (say) Google offers to the consumer is such that the consumer finds it optimal to act according to their true preferences. In so-called direct mechanisms, this means that the consumer would prefer to reveal their true type (information about themselves) rather than trying to fake being someone else. If this is true, it is much simpler to build mechanisms that are robust and achieve good outcomes.
For example, hotel booking websites are typically not incentive compatible if they offer different prices to different users. A user who is known to accept high prices will be offered higher prices in the future.
However, if this tracking is imperfect, the price mechanism is not incentive compatible. The consumer would prefer to pretend to have a lower willingness to pay, and would thus be offered the same hotel for a lower price.
On the other hand, bundled goods (like car accessory packages) may be incentive compatible: even though each package is somewhat imperfect for each customer, they still prefer "their" package over another one that is less costly. That way, the car company can maximize profits.
In mechanism design (for example, in the design of auctions), incentive compatibility is usually a requirement for deriving mechanisms that maximize the objective function. This is because there are of course infinitely many complex mechanisms one could propose. But, under some conditions, incentive compatibility ensures that the "optimal" mechanism will have a simple structure.
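To make the auction example concrete, here is a toy sketch (my own illustration, not from the thread) of the second-price (Vickrey) auction, the textbook incentive-compatible direct mechanism. The key property: the winner pays the highest *competing* bid, so the price does not depend on the winner's own bid, and bidding your true value weakly dominates any misreport.

```python
# Toy illustration (mine, not from the thread): in a second-price
# (Vickrey) auction, truthful bidding weakly dominates because the
# price paid is independent of the winner's own bid.

def vickrey_utility(my_bid, my_value, other_bids):
    """Bidder's utility: value minus price if they win, else 0."""
    highest_other = max(other_bids)
    if my_bid > highest_other:
        return my_value - highest_other  # price independent of my own bid
    return 0

true_value = 10
others = [4, 7, 9]
truthful = vickrey_utility(true_value, true_value, others)
print(truthful)  # 10 - 9 = 1

# No misreport (bids 0..20) can beat bidding the true value:
assert all(vickrey_utility(b, true_value, others) <= truthful
           for b in range(21))
```

Overbidding only risks winning at a price above your value; underbidding only risks losing an auction you would have profited from. That is the "simple structure" incentive compatibility buys you.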
Finally, there are substantive reasons: If a customer has to act against their true preferences, the system is likely unstable and gameable. And also, people might walk away.
Some people also consider the participation constraint a part of incentive compatibility.
That is, the consumer must at least gain as much as not participating at all.
In the present case, the participation constraint is violated. The consumer gains nothing by being targeted and, if asked, would rather opt out. In that case, Google can design whatever they want; an informed customer will reject it.
Google and Ad firms know this, which is why it is virtually impossible to truly opt out. The participation constraint is knocked away by deceptive practices and dark patterns.
For the majority of users, I do believe tracking has some upshot, at least for reducing ad spam and ad fraud. It's really hard to prove this without a counterfactual study, but if you compare Facebook ads with the crap on news sites today, I think that situation is illustrative of a place where Facebook's improved targeting saves the user from a lot of spam / fraud. And I'd argue that outcome is a result of Facebook having a monopoly over their own targeting and identifiers.
Any new ad provider (one that does not have 20 years of user data like Google, or 15 years like Facebook) will not be able to fingerprint as well with FLoC as they could with cookies. Existing FANG competitors like Verizon, AT&T, Comcast etc. have lots of historical data to offer and even some FLoC-like products, but if Google's FLoC replaces cookies then it's harder to derive value from, say, Verizon's FLoC -- simply because a cookie is a more effective fingerprint than a Google FLoC.
So FLoC is anti-competitive and just further entrenches Google's monopoly. Does that really impact consumer privacy? Well, if the ad market is less competitive, the level of privacy that the market can support will be less consistent. There will be players like Apple who make so much money off hardware that for ads they're fine with the unobtrusive choice of selling Google default search engine rights (versus, say, the loads of crapware that can be found in some Android distributions). But there will be lots of smaller ad players who will get more desperate for ad spam and reach for shadier ways to target you with what little data they can get. For smaller properties, perhaps Google FLoC is so useless for targeting and lift that everybody switches to requiring a sign-up with a phone number SMS confirmation.
_Can_ FLoC be incentive compatible? What are the actual likely economic consequences? Google knows this (or they think they know), hence FLoC got launch approval. But Google won't share those details with you. They want you to think FLoC is just great for privacy. This is how Sundar addresses his position that "we need to work hard on user trust."
The proper headline should be "Ad tech firms test FLoC".
As a fingerprinting surface FLoC has similar properties to the Battery Status API -- not stable for the same user over long intervals, but can be used to help match pageviews from different domains that were close in time.
But only the big sites like Google have enough users to birthday-paradox their way into a meaningful ID graph, so you're safe from that tiny ad startup that also happens to be threatening Google's business model...
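A rough back-of-the-envelope (my own assumed numbers, not from the article) for why cohort IDs mainly help sites with huge traffic: a cohort narrows a pageview down to a candidate set whose size scales with the site's concurrent audience, so only very large sites end up with candidate sets small enough to cross-match.

```python
# Back-of-the-envelope (assumed numbers): how much a FLoC cohort ID
# narrows down a pageview when joining observations across domains.
import math

num_cohorts = 33_000          # assumed order of magnitude for the cohort space
concurrent_users = 1_000_000  # assumed users active on a big site in a window

# Expected number of candidate users sharing one cohort in that window:
candidates = concurrent_users / num_cohorts
print(round(candidates, 1))  # ~30 candidates instead of a million

# Equivalently, the cohort alone contributes about log2(num_cohorts) bits:
print(round(math.log2(num_cohorts), 1))  # ~15 bits of identifying signal
```

A small site seeing a few hundred visitors in the same window often gets a candidate set of size one, but it has no second domain's traffic to join against; the big players have both the traffic and the cross-domain reach.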
Looks like to disable it in Chrome... you have to find a deeply nested config and then tell Chrome you want to "disable privacy-preserving features". !?!?
I can do this. But of course nobody else will, since it's telling them they are disabling privacy-preserving features! It's making me kind of livid.
If you want privacy, stay on Firefox.
Firefox unfortunately is becoming harder to use with every release.
Really, if you're at the point where you're hunting for hidden settings to disable in your browser, why not just use Firefox?
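Worth noting alongside the user-side Chrome setting: Google's documented mechanism for a *site* to opt its own pages out of cohort computation was an HTTP response header, which many sites shipped at the time:

```
Permissions-Policy: interest-cohort=()
```

That only excludes the site's pages from contributing to a visitor's cohort; individual users still had to dig through the browser setting (or use another browser) to opt out entirely.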
You mean to tell me that FLoC will be used for fingerprinting anyway, and it changes nothing about advertiser's strategies and tracking techniques, and they won't self-regulate, and that it doesn't work to throw them bones of extra data and hope that they'll willingly stop their abusive behavior if we meet them halfway?
This is a shocking development.
The only consolation is that Google's next privacy compromise with the ad industry definitely won't suffer from exactly the same problems. The best thing for us to do now is to assume that this is a completely random, one-time fluke that doesn't reflect anything on the industry's character. No need to change the way we engage with the advertising industry on privacy issues because of it. We should keep offering them compromises that make it easier for them to track users, and keep assuming that they'll in good faith regulate themselves.
Google is known to fingerprint you on their sites and this practice will continue unless some sort of political action is taken to make fingerprinting illegal. WebGL is not the only heuristic used to reliably determine it's a specific device accessing a site, but a whole slew of techniques can be used to reliably determine it is 'you' who is on a site (you can even detect if a browser is running in a virtual machine, among many other techniques to fingerprint).
To mitigate this, I do most of my browsing with JS disabled by default, and if I really need JS turned on (for a site I trust like my bank), then I temporarily turn it on for that specific site. Also you can just disable WebGL in Firefox in about:config but keep in mind, there are many other techniques Google and `ADTech` in general can use to fingerprint you.
Firefox has per-site settings for whether the canvas should be accessible, which are very useful, but it doesn't have per-site settings for WebGL; it's either on or off for the entire profile. That kind of defeats the point of canvas blocking, since (at least last time I checked) WebGL fingerprinting is possible regardless of whether the canvas can be read from.
I'm sure there's some technical reason, but it really seems like turning Canvas reads off for a site should also turn off WebGL.
There have been no Mozilla comments on the bug yet. Perhaps so many sites use WebGL that the user would be pestered with too many permission prompts?
The objective for the user should be to send as little information as possible. If a fingerprint shows the user is not running JS and is providing only a very minimal, generic set of information, how much value is there in trying to serve ads to that user? Users who want better privacy should be trying to reduce the amount of information they send. Maybe the first movers in that effort are "fingerprinted" as being privacy-conscious, tech-savvy, etc. That is probably going to result in fewer ads served to them, not more. Eventually, when most users, "the herd", are sending the minimal amount of information, the fingerprints all look similar.
Think it through. Advertisers do not care about users who will not indiscriminately run JS. They go for the low-hanging fruit.
Give me a VPN that regularly geolocates me at a Starbucks 30km out of town. Give me plugins that stuff my search history with a fixation on the Cincinnati Bengals and replacement parts for a 2013 Hyundai Accent. Yeah, they might see my actual traffic patterns, but the goal is to make it expensive and hard to filter the real use from the elaborate story.
To my understanding, Google (and many other sites) use WebGL and other fingerprinting techniques to distinguish real users from bots.
This does not mean they use it to track individual users (if that were even legal in Europe under GDPR).
“If you’re going to see an ad regardless, would you rather it be relevant to you or not?”
The answer is and always has been “I don’t want to see an ad in the first place, I don’t want you collecting any information about me under any circumstances, and anything that makes ads spots worth less is a positive.”
You know the cheeky "you won't see fewer ads, they'll just be less relevant" line? The only long-term solution to actually see fewer ads is to drive their value down to nothing.
The fact that you can pay to remove ads on a lot of services kinda gives the game away that they’re a net negative. Why would you ever want them gone if they’re so useful and helpful?
AdTech is such a garbage industry -- drunk on their own delusions of facilitating commerce and helping businesses reach customers, while being so annoying and vile that the only way their products can function is to insert themselves into every facet of life, because nobody would ever seek them out on their own. Bleh.
You can only hope that whatever marketers would do instead to promote their stuff is less annoying or damaging. I have my doubts.
There's a huge amount of content that people are just not willing to pay for, but would gladly view ads. You may be willing to pay for Youtube Premium and Twitch Plus or whatever, but the vast majority of people do not.
Do I feel like I'm a horrible person because I help make websites more money so they can stay in business? Heck no. It's the only reason 90% of these sites are able to exist in the first place.
-- your next health insurance change tripling your premiums because of that previous bill you had such a hard time paying down
-- your mortgage / refi / loan application to be denied for factors completely separate from your actual credit report, but have made it into a reputation system
-- you get quietly, passively passed over for that job application because of the reputation hit from that one really poorly-thought-through social media post last century.
Data mining and data stores that affect people's lives and opportunities, that aren't just obfuscated, but actively secret are a blight.
If you have a service like YouTube that provides so much value and is such an economic multiplier that we can't possibly imagine society existing without it, then why don't we just pay for it? The fact that we have no system to fund public goods other than ads and taxes is a huge failing. You're basically describing a tax system that is paid in consumerism, which sucks because it's inherently regressive.
If you have a product which is genuinely useful to hundreds of millions of people but that the value only materializes when it's available for 'free' to everyone then we should have ways of getting you funded that isn't attention or convincing individuals to pay you a subscription fee.
You're hitting on an important economic function that ads are currently performing, but then twisting it around and saying that there's no possibility for anything but ads to perform that function.
Why is it regressive? High earners pay more for ad-funded sites than low earners. That's not regressive. It's not progressive either, strictly speaking, but it's better than subscription fees which are regressive because everyone pays the same absolute price regardless of their disposable income.
I'm not certain this is true. And targeted advertising certainly makes it less true, not more. Advertisers are not excluding low-income users from advertising; instead they're targeting them with more of the products those users are likely to buy.
Untargeted ads for higher-cost luxury products might make the Internet cheaper for low-income users, but that's not what is happening in the ad industry right now. Ads exist to get you to spend money, and they are just as targeted at poor people as they are at rich people.
I wouldn't be that surprised if the effect is the opposite, since poor people have less access to the research resources, comparison shopping, and trials that would allow them to combat the psychological effects of advertising. They're also generally under a lot more stress and time pressure when they shop than rich people are, which is likely to make them even more vulnerable to manipulation during a difficult purchase decision.
I'm pretty sure it is, because nothing changes the fact that you can ultimately only spend what you earn (even considering credit). A share of that spending goes to advertising.
If someone can spend 10 or 20 times as much as another person after basic food and shelter then that difference trumps all other factors by a very wide margin.
I don't think it matters much, but just for the record: I don't believe rich people comparison shop as much as lower income earners. They buy what they fancy and they throw away what they don't like. I also don't think rich people are much harder to manipulate (but I'm not sure about that). They think much less before tapping that buy button. I know that much.
I'm not sure I follow? Google ads don't cost a percentage of the final product price. How much I spend on advertising might be entirely unrelated to the per-item cost of my product -- and how profitable my company is might not have anything to do with how luxury my company is, it might just be down to market penetration and my profit margins. Plenty of companies make enormous amounts of money targeting poor people.
Does an ad for a five-star restaurant on Google cost more than an ad for Taco Bell? And it's not like rich people are being shown a larger quantity of ads on a website than poor people.
If that is true, then per-person contributions to ad-funded services would be roughly proportional to personal spending. Someone spending 10 times as much as another person would also pay 10 times as much for using Google search or Youtube.
If these services were subscription based then both would pay the same price in absolute terms, which is very regressive in comparison.
But just to make this clear. I don't claim for a second that my extremely crude calculation is anywhere near correct. What I'm saying is merely that subscription funding is very regressive compared to ad funding.
I don't know what the exact extent of that difference is and I cannot break it down to the level of pricing specific Google ads.
> If these services were subscription based then both would pay the same price in absolute terms, which is very regressive in comparison.
This is the part where you kind of lose me, unless a rich person is also watching 10x as many videos on Youtube. It seems like you're saying a fixed percentage of a person's spending is going to the websites they visit, but why would that be the case?
You will see the same number of ads on a Youtube video regardless of whether you're rich or poor. And the cost of each of those ads -- the amount of money that gets paid out by the business -- is based on the competition for the ad slot, not the price of the product. I wouldn't take it as a given that products like Taco Bell and Pepsi spend less on online advertising than a luxury watch brand. If anything, I would expect products in crowded consumer markets (ie lower-cost, non-specialized, mass-market goods and services) to have more competitive ad slots that cost more money to target.
So I understand that rich people spend more money, I agree with you on that. But I don't see how you're connecting that fact to the idea of more money from those rich people going to the websites who are displaying ads. I don't see the thread of logic that says that a product costing $500 per unit is contributing more ad money to a website than a product that costs $5 per unit.
Very roughly yes. Some percentage of a typical company's revenues is spent on ads, and revenues from each customer are obviously proportional to that customer's spending. It's the same thing (leaving aside sales taxes).
Companies try to maximise the effectiveness of their advertising campaigns. The effectiveness depends on how many people actually go ahead and buy the product relative to how much the ads cost. If running ads on Youtube is less effective, then ads prices on Youtube would have to fall and Youtube would earn less.
Let's say only extremely poor people were using Youtube. None of them would ever buy a high-end smartphone. How much would high-end smartphone makers pay to Youtube for the honor of running ads there? The answer is zero.
Now let's say there are two groups of Youtube users. One group never buys a high-end smartphone. The other group buys one every year. Now it makes sense for smartphone makers to fund Youtube through their ads, but only the group actually buying smartphones pays for it. So the rich group effectively subsidises the poor group's Youtube usage.
I have chosen an extreme and unrealistic example to explain the principle. In reality, there will be a mix of products. The cheapest ones will be bought by almost everybody in roughly the same quantities, and some luxury goods are never advertised on Youtube at all. But the relationship between per person spending and that person's contribution to ad funding for the sites they visit still roughly holds.
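The principle above can be put into toy numbers (entirely made up, just to show the shape of the claim): if companies spend a roughly fixed share of revenue on advertising, each user's contribution to ad-funded sites tracks their own spending, while a flat subscription is the same absolute price for everyone and therefore a much bigger fraction of a low earner's budget.

```python
# Made-up numbers for the subsidy argument: assume companies spend a
# flat 5% of revenue on ads, so each customer's contribution to
# ad-funded services is proportional to that customer's spending.

ad_share_pct = 5  # assumed flat share of revenue spent on ads

spending = {"low_earner": 5_000, "high_earner": 50_000}  # yearly spending
contribution = {u: s * ad_share_pct // 100 for u, s in spending.items()}
print(contribution)  # {'low_earner': 250, 'high_earner': 2500}

# A flat subscription instead: the same absolute price for both, which
# is a 10x larger share of the low earner's spending -- i.e. regressive.
subscription = 120
burden_pct = {u: 100 * subscription / s for u, s in spending.items()}
print(burden_pct)  # {'low_earner': 2.4, 'high_earner': 0.24}
```

None of this settles how strong the effect is in practice (targeting complicates it, as discussed above), but it shows why "everyone pays the same fee" is regressive relative to "everyone pays in proportion to what they spend".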
This is what I think. I'm not an economist though. There are certainly many open questions as to how strong this redistribution effect is and what the effect of ad targeting is. But the claim that there is no such redistribution effect at all seems extremely implausible to me.
Good point. Netflix replaced cable for a reason. People can stomach ads up to a point but I don't think anyone likes them.
- I don't trust ad networks to give me more relevant ads, the data they currently have has not made my advertising experience better, so I don't see why giving them more data is going to fix the problem. I don't see strong evidence that advertisers know how to make useful ads regardless of how much data they have.
- I don't trust ad networks to target responsibly for my benefit. Ad networks are trying to manipulate me into buying products, they are trying to affect how I view the world. That's a hostile relationship, they don't have my best interest in mind, so "more effective" is not necessarily going to translate to my benefit. Ad networks are not trying to make ads more useful to me, they are trying to get me to buy stuff.
- I don't trust ad networks to only use tracking to improve relevance. I take it as a given that their tracking will be used for underhanded price changes, changes to UX to make it harder for me to complete certain actions, deal availability, geolocking, changing results when I comparison shop, and other anti-user practices.
- I want to have control over what data goes into my advertising profiles. Tracking me everywhere forces me to treat my advertising profile like I would treat a cat -- I don't want to do reinforcement training on my ads. With tracking, if I want to be advertised a certain product, I have to reinforce to the network that I care about it. If someone sends me a link, I have to think before clicking on it because I don't know what that will signal to advertisers. This is a really awful way to interact with computers in general, and it discourages people from freely browsing the web.
- Ad tracking creates an additional security risk for my data. I might get advertised an embarrassing product at the wrong time in front of the wrong person, that information might get leaked to other 3rd-parties that are somehow even less scrupulous than advertisers. There are multiple instances of ad networks effectively doxing people, outing their secrets. It's not safe to trust ad networks with that data.
- Even if none of the above was true, I don't take it as a given that even at a purely conceptual level targeted advertising is better than untargeted advertising. I disagree with the philosophical premise behind that kind of marketing, I think that marketing should be user controlled and based on signals that users consciously give about what they want to see. I think in most cases that users should start the search for a new product themselves and decide what they want advertised to them. Even if the advertising industry was ethical (which to be clear, it's not), I still don't want targeted ads.
- And even if I did want targeted ads, heck anyone who is tracking me for advertising purposes without my permission. If your product is so heckin great, then it shouldn't be a problem for you to get me to opt into tracking. The lack of affirmative consent is a problem, regardless of the outcome. You have to get people's permission before you do this stuff -- even if I'm happy with the result, that doesn't excuse you from asking my permission. And no, collecting the data anyway but just showing less relevant ads on the front end doesn't count. I don't want the tracking code on my computer at all unless I've invited it to be there.
Now, completely separately from everything above, I also don't want to see ads at all and I think everyone should block them and burn the entire industry to the ground regardless of the consequences. BUT that is not the primary reason why I'm against fingerprinting and user tracking. Even if I loved ads, I still wouldn't be OK with the kind of tracking that tech companies are doing, and I still wouldn't want them to fingerprint me.
And no, I am not being sarcastic, nor do I think I'm being overly hyperbolic in this.
It is an improvement to privacy. Cookies uniquely identify me with no other information required. FLoC does not uniquely identify me with no other information required.
The opt-out is similar too: block cookies in the browser or block FLoC in the browser.
Basically, it’s a way for google to implement fingerprint resistance in chrome and default to blocking third party cookies without killing their own funding source.
I think this is a pretty good take. With FLoC there is a possible story to tell companies that want to target/customize, but only to the extent tolerated by the users.
Once that's established, it's much easier to go after shutting down businesses using less ethical means.
No, not really - ETP only blocks the most technically literal meaning of "third-party cookie" while still allowing plenty of tracking scripts to work with shared first-party data.
Chrome has well over 50% of the desktop browser market share, which by some measurements makes it the only major browser, and FLoC is definitely a prerequisite to Chrome disabling third-party cookie support.
Cookies were going away regardless, every other browser is doing it, Chrome is not powerful enough to go against the grain on this issue.
Separately from removing cookies (which was always going to eventually happen), Google proposed FLoC because they claimed it would help advertisers accept the change without encouraging them to build another equivalent tracking method using fingerprinting. Unsurprisingly, advertisers immediately took FLoC and used it to build another equivalent tracking method using fingerprinting.
The mistake here is meeting the advertising industry halfway. Just remove cookies. You don't need to propose anything else beyond that.
It’s been something like 4 years since Safari started blocking cookies. You say Google isn’t powerful enough to resist, but Chrome has >60% market share.
It is definitely in Google's best interest to act like FLoC is necessary to remove cookies, but I don't take their marketing at face value. They care about being competitive with Apple; they were even forced to pretend to care about advertising IDs after iOS's recent changes.
:shrug: pretty much every other browser has rejected FLoC as well, so I guess we'll find out if Chrome is really able to just go their own direction. But I think this is one of the rare instances where people are overestimating Chrome's power.
I don't believe Chrome's team would be doing any of this at all if they didn't see the writing on the wall about where the industry is going. My take is that they're trying to get in front of an inevitable industry-wide change to mitigate its impacts on their core business. It's not out of charity or real concern for user privacy that they're proposing any of these compromises; Google would be perfectly happy to stay in a world with 3rd-party cookies if they thought they could get away with it.
A lot of their recent proposals start to make sense when viewed through that lens. See their effort to propose a standard where 3rd-party sites can be treated like 1st-party. See also their increased efforts on moving away from URLs for domain scoping. See also Manifest V3. Google is scared about this. They are scared of the situation getting out of their control.
And even if Chrome is powerful enough to resist removing 3rd-party cookies forever, I'd almost prefer they do that. It'll make it easier to get people to switch off Chrome when it is objectively less private than every single other browser in meaningful, easily demonstrable ways. And we need to figure out a way to break up Chrome's stranglehold on the web anyway, so every reason helps. With the addition of FLoC, Chrome will already be less private than other browsers, since FLoC is a strict privacy downside over just removing cookies. So it's good for that loss of privacy to be even more visible, and for Google to lose the ability to hide behind a confusing narrative about how their fingerprinting vectors are actually good.
This all seems quite speculative. In the first paragraph they describe FLoC IDs as changing constantly -- why would they assume new IDs are not being generated, and that groups are not constantly being mixed?
> “If your behavior doesn’t change, the algorithm will keep assigning you in that same cohort, so some users will have a persistent FLoC ID associated with them — or could."
When combined with other information that is already being used (such as canvas fingerprinting and other techniques), this looks like it can help narrow it down even further.
> “We can use that as another signal to create a stable identifier for them.”
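The quoted concern can be sketched with synthetic data (all numbers invented): even when a single week's cohort ID is shared by many users, the *sequence* of cohort IDs a tracker observes for one browser across a few weeks shrinks the anonymity set dramatically.

```python
# Synthetic sketch of the "stable identifier" concern: one cohort ID is
# shared by ~100 of these simulated users, but the sequence of IDs
# across three weeks is effectively unique.
import random

random.seed(0)
NUM_USERS = 100_000
NUM_COHORTS = 1_000
WEEKS = 3

# Assign each simulated user a cohort ID per week (random here for
# simplicity; real cohorts drift with browsing behavior).
history = {
    user: tuple(random.randrange(NUM_COHORTS) for _ in range(WEEKS))
    for user in range(NUM_USERS)
}

target = history[42]  # the sequence a tracker observes for one browser

# One week's cohort alone leaves a large anonymity set:
single_week = [u for u, h in history.items() if h[0] == target[0]]
print(len(single_week))  # on the order of 100 users

# The full three-week sequence narrows it to (almost certainly) one:
candidates = [u for u, h in history.items() if h == target]
print(len(candidates))  # ~1
```

Combined with even a weak extra signal (coarse geolocation, user agent), the remaining ambiguity disappears entirely, which is exactly the "stable identifier" the quote describes.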
The best way to explain it is that a lot of companies have been making significantly more money than their technology is worth. A number of initiatives have attacked the data-in side of the equation so the underlying tech is showing how questionable it really is.
This type of research should be filed under “could be big” but at this point it’s closer to public relations for ad tech firms than “the sky is falling, become Amish.”
Brave is perhaps the most ethically challenged browser out there. Hopefully they have stopped doing this, but they were injecting their own ads instead of what the publisher put on their website.
The reader can guess how Brave expects to make money with a free browser that is handing out BAT :-)
Personally I reserve judgement, but I see why some look at it this way.
Of course sites can block requests based on Brave's user-agent string if their business depends deeply on ad revenue and they consider users with ad blockers to be abusing their service. That's their prerogative, if this really irks them and it's worth losing the users. On the flip side, if this becomes popular enough, then site owners will see real money on the table and opt in to picking it up. That seems like an easy fix for them. If I were a site contemplating either blocking content from users with ad blockers or allowing cooperative users to opt into a more private client-side ad experience that still gives me the opportunity to collect revenue from their traffic, I'm pretty sure I'd choose the cooperative approach.
(Not on Twitter, can't run the experiment myself.)
After a few weeks of this you'll get some pretty strange tweets from really obscure brands. You will however still regularly see promoted tweets from financial services companies, because there are apparently an infinite number of them.
I currently see no reason to switch from FF which I use everywhere and benefit from Sync. Hopefully someone else will find it of interest.
I have a few apps that I just use the mobile web version. The bonus is that there is no app collecting gps and lists of installed apps and such, though, in theory, the new permissions help with that.
Think about it like this oversimplified/stupid example: a "fix" or change is implemented in Chromium. In the long run (intended or otherwise) it turns out it moves pixels slightly differently than the way Firefox does it. If almost all your visitors use Chrome, you have to design your site to be perfect in Chrome, and you might not bother doing so in Firefox. Now you have sites that look as intended in Chrome but maybe not in Firefox. This makes Firefox users use Chrome more.
Now Brave et al. are part of the problem, helping drive the only real competition out of the market (and killing their own only way out, should they someday need to change engines).
With extremely complex code this is very hard to avoid being a part of, and a company like Brave is way too small to fork Chromium for long, if at all.
What this means is that this engine becomes the de facto standard of the web and this standard is controlled by the main contributor of the engine.
Every browser is now constrained by Google's own decisions about what the web should be. Sure, they could technically disagree by forking WebKit/Blink, but since websites are made to work with Blink, a disagreement means being incompatible with such websites.
That's already the way it works in the real world... the standards are irrelevant and ignored, only caniuse and browserslist actually matter. Like it or not, Blink is the new IE6, and its marketshare is only increasing.
Ideally it would be something not controlled by Google but by an independent third party (hand Blink over to Mozilla, deprecate Gecko?), but good luck with that.
Maybe this system wouldn't be as ideologically pure as building compatible renderers to a set standard, but it would result in far better developer and end-user experiences as the web quickly standardizes to a single renderer. The world simply does not need 10 different ways to display HTML with 90% compatibility.
Of course those are just random made-up ideas, but the point I want to make is that this gives a single actor the power to define the future of our one and only international knowledge network.
The thing is, the existence of Gecko never actually meaningfully challenged corporate oligarchies. Mozilla's mission was noble but they were never particularly effective at it... web standards went from IE6 being the de facto standard to the Wild West for a while to Webkit dominance to a Blink/Webkit duopoly. There was never a period where we actually saw a standards-based web ecosystem. It was always renderer-based. In that sense, I'd argue the Gecko contributors (and Mozilla as a whole) would have more influence over the web ecosystem if they abandoned Gecko and focused on the Chromium/Blink project instead, especially if they had override/veto power over questionable commits from any one corporation. As it is, Gecko/Firefox is less than 5% of the web. You can't influence, much less set, any real standards when you're just a rounding error.
Like it or not, Chromium IS the standard. Only when Mozilla realizes that will they actually have a chance to succeed at their mission, instead of being the beloved but always-losing underdog...
One way or another, browsers are heading towards engine homogeny (or hegemony), but Firefox and Safari are at least slowing this process down to some extent.
I know it might be the extensions (I have 20+) but I seriously don't care. Chrome manages to be fast with the same set of extensions.
I don't like Google but Firefox's slowness is a real strain on my productivity and brain well-being. Hope they improve even more soon.
I am on an iMac Pro btw. Stuff like this should not ever happen on a workstation.
Can't describe it perfectly. The UI is responsive but page loading is just severely slowed down -- not always but often. I have a bunch of privacy extensions but again, they don't seem to make Chrome sweat.
If you are using uBlock Origin, you may want to see if un-checking "Uncloak canonical names" option in the "Settings" pane in the dashboard makes a difference.
There have been reports of slow page load with some network configurations, and this has been linked to DNS lookup in uBO.
Chromium-based browsers do not support CNAME-uncloaking, and so this would explain why the issue is not present in Google Chrome.
* * *
EDIT: pay no attention to the text below, I have misread the linked documentation. uBO isn't using external proxy for any network requests.
Not sure how much -- or at all -- you're involved with uBO. Your name does ring a bell though, so I'd like to remark that making the user's browser use a proxy is a step too far. It shouldn't be automatically enabled.
A privacy extension should do everything it can locally and stop there. If I one day figure that's not enough, then I'll set up a privacy VPN (or use an existing one).
I don't want that decision made for me on my own machine without my consent. :(
And apologies if my comment is misguided -- I only skimmed the linked page and I might have misunderstood.
uBO does not do this, and nowhere is there any suggestion that uBO does this.
Users configure their own network settings, and it was found that in some cases when the browser is configured to go through a proxy (through either OS or browser settings), uBO's CNAME-uncloaking feature, which requires a call to the `dns.resolve()` API, would cause undue delay to page load. The root cause is outside uBO and outside the browser, it lies in the proxy.
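To make the mechanism discussed above concrete, here is a minimal illustrative sketch of what CNAME-uncloaking does conceptually. This is not uBO's actual code; `resolveCname` is a stand-in for Firefox's `browser.dns.resolve()` WebExtensions API (which is what Chromium-based browsers lack), and the blocklist matching is deliberately simplified:

```javascript
// Hypothetical sketch of CNAME-uncloaking: a hostname that looks
// first-party may be a CNAME alias for a known tracker domain, so
// filter rules are applied to the canonical name as well.
function isTrackerHostname(hostname, blocklist) {
  return blocklist.some(domain =>
    hostname === domain || hostname.endsWith("." + domain));
}

function shouldBlock(hostname, blocklist, resolveCname) {
  if (isTrackerHostname(hostname, blocklist)) return true;
  // This extra DNS lookup is the step that can add page-load
  // latency when the network path (e.g. a proxy) is slow.
  const cname = resolveCname(hostname);
  return cname !== undefined && isTrackerHostname(cname, blocklist);
}
```

With this sketch, a first-party-looking subdomain like `metrics.example.com` that resolves to a canonical name under `tracker.example.net` gets caught even though the visible hostname matches no filter.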
FYI I'm asking, not doubting or blaming or whatever.
But the fact remains that I installed 100% the same set of extensions on Chrome and it loads pages at least 2X faster.
I might be a programmer, I might care about putting a rod in Google's giant personal-info-gathering machine, and all that good stuff that makes us feel we're making a difference in the world -- but when 1/3 of all my pages load more slowly in Firefox, I can tolerate this only for so long.
So I don't really know which factor is the real page load speed detractor. I just wish the Firefox team fixes it.
You can't really compare that to a native linux/windows experience.
You're telling me Firefox under macOS is using Safari's engine? If so, wow. Extremely disappointing.
Not sure how it works, but the EFF doesn't seem totally confident that it can detect those affected: "This page will try to detect whether you've been made a guinea pig in Google's ad-tech experiment."
There's no reason why the website would not be reliable at detecting whether FLoC is enabled on your browser.
You can also use https://floc.glitch.me/ which was linked in one of the blog posts from Google.
How can you have an Internet without using IP addresses? Do you just use Onion routing all the time?
I think cookie + browser fingerprinting is a much better way to track people in this situation, because it removes the uncertainty associated with dynamic IPs and multiple users behind a NAT.
For those of you who don't live in small countries: being 2 hours away in Sweden means you have to pass several other independent cities on your way there.
Newer ISPs use CGNAT so you'll be sharing your public IP with a few neighbours (7+you in the case of my ISP).
They only identify people when joined with other information.
Private relay is secure as long as Apple and the third party do not cooperate, but end to end flow correlation is much easier because streams are not isolated.
Onion routing is much more sophisticated than private relay: even end to end correlation is more difficult because of how virtual circuits are made.
"If you are a website owner, your site will automatically be included in FLoC calculations if it accesses the FLoC API or if Chrome detects that it serves ads."
Personally, I don't trust Google that much. Chrome knows which websites I've been to, so it could easily (accidentally, or on purpose) just include any site. Google also has a history of starting conservatively, then rolling out stuff a little at a time. "Boiling Frogs".
Rolling out the header everywhere seems like a good way to keep Google honest about it. Chrome can obviously still do whatever it wants, but it would be harder to explain for them if they shared info on an explicitly opted-out site visit.
It's also just a sort of ceremonial way of expressing dissent with the idea in general, in a way that people could collect statistics on and track.
It does very little; effort is better spent getting people off FLoCed browsers like Chrome.
More info: https://seirdy.one/2021/04/16/permissions-policy-floc-misinf...
This is what's really bad about FLoC; it's so hard to fight back on behalf of oblivious Chrome users who didn't opt out. For the uninformed, there's no winning move.
And I'm unconvinced on this part:
"If your website does not include JS that calls document.interestCohort(), it will not leverage Google’s FLoC. Explicitly opting out will not change this."
I try to know everything running on my site. But especially with things like a deep npm dependency chain, I know not everyone knows everything that's running on their site. Or maybe Chrome will interpret an image that happens to be an IAB size as an ad. I recall a certain storage-related company recently running Google Analytics on an admin page, something the tech team didn't intend to happen. But shit happens.
I think it's worth putting up, both for whatever limited help it provides, as well as a visible vote against FloC.
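For reference, the opt-out being debated above is a single HTTP response header. A minimal sketch, using nginx purely as an example (any server that can set response headers works the same way):

```nginx
# Ask Chrome not to include visits to this site in FLoC cohort
# calculations; the empty allowlist "()" disables the feature.
add_header Permissions-Policy "interest-cohort=()" always;
```

The raw header is `Permissions-Policy: interest-cohort=()`. As both comments note, this is a request to the browser, not an enforcement mechanism.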
The fact that the above sentence sounds unrealistic nowadays is extremely depressing. If malware distribution through web browsers wasn't already the norm, it'd look like common sense.
I do admit that "a visible vote against FLoC" is a good reason to put this header; I've updated the article.
I don't think these votes will sway Google but I do think they'll spread awareness. I still think that a better use of our time is getting users off Chrome.
FLoC (and FLEDGE, and PARAKEET, and a million other bird proposals) is being used as a way to mitigate some of the loss that publishers and advertisers will see when those privacy measures are put into place.
I think this illustrates that the whole bird-brained idea is to placate the advertisers so they won't run to congress, while continuing to allow Google to fingerprint people with its own, better, data, thus increasing its advertising advantage.
I sit in on many of these W3C meetings (I'm not from Google) and the discussion is always "given that we want to achieve X definition of privacy (where X varies depending on the proposal), how can we mitigate some of the fallout that will happen". It's never "how can we defeat these privacy measures that are going to be put into place so we can keep the status quo".
You can argue that advertising is a net negative for the web or that it's evil or whatever you want, but if you frame it as "how can we take steps to make things more private for end users without completely destroying the way business on the web make money" then I think the current path is a reasonable one.
Modifying your last sentence illustrates this nicely:
You can argue that slavery is a net negative for the world or that it's evil or whatever you want, but if you frame it as "how can we take steps to make things better for slaves without completely destroying the way business on the world make money" then I think the current path is a reasonable one.
Dropping third party cookies is just one small piece of it.
Ads by themselves aren't bad and neither is personalization. These methods of monetizing content have led to a huge surge of interest, talent and capital into software engineering companies, and have arguably funded some of the most remarkable technological advancements of the past two decades, or longer if you consider all the things that have come out of research projects at companies with a primarily advertising business model.
There are certainly valid criticisms of personalized advertising and personally, I believe there is too much concentration of power in the market.
However, I'd love to hear intelligent people's perspectives here on how we preserve this wildly successful business model while tackling the abusive parts. If tomorrow the consequences of these restrictions by platform owners such as Apple or Google end up concentrating more of the power within the walls of the largest players, we're simply trading one set of problems for a much larger set.
No regards for privacy at all, this rampant data harvesting and spying must stop now.
You don't get to be the biggest entity on the internet and keep the cutesy hacker facade, they've got an enormous responsibility to the community now.
Why do anti-Google articles always have someone claiming unfairness or singling out as one of the first comments?
I have often noticed that the mods alter the ranking of posts, but I don't think that they did in this case (they admitted doing it in some cases). The content on HN is very much curated/controlled.
"New thing not perfect!"
> As privacy and data ethics advocates warned, companies are starting to combine FLoC IDs with existing identifiable profile information, linking unique insights about people’s digital travels to what they already know about them, even before third-party cookie tracking could have revealed it.
> Advertising companies are already strategically gathering FLoC IDs and linking them to identifiable data or analyzing them in an attempt to uncover information about people that may not have been known before, mimicking how they have parsed what third-party cookies told them about people’s behaviors.
I see the submitted title has been altered anyway.
There is a fundamental tension between me wanting to walk around the world as a free human individual and a large group of people who for some reason or another want to know exactly what kind of fart I prefer so that they may match me with the correct kind of fart-cushion so that I might buy it.
The idea that I might not want to have a fart cushion in the first place, and that if I fancied the idea of getting one I might go to the fart cushion store to find the perfect fart cushion, does not seem to occur to these people - I am sure technically they have thought of the possibility, but they do not respect me or my boundaries.
What this "new thing" does is manage my suspected fart-cushion preferences inside my browser, instead of some "cloud", to then tell fart-cushion-selling enthusiasts that indeed, I might be one of these people with an interest in fart cushions.
This "new thing" doesn't change anything about the fundamental issue that my thoughts and aspirations as a fart enthusiast are not the business of any moron who wants to market their newest fart-cushion invention to me.
Don't forget that there are people who want to find out if you are a fart enthusiast, so they can then use that to coerce you into "playing ball" with them.
Sort of the digital equivalent of the "$5 wrench." Social media and adtech have been a freaking goldmine for spies.
Those using ad-blockers are going to use them anyway, but some assurance that I won't be tracked would go a long way towards me turning off ad-blockers on the sites I don't frequent, and even those that I do if they don't have an ad-free subscription. Those wanting personalized ads can set their Do Not Track preferences accordingly, possibly per site.
Too busy to do a long post in detail, but short version is that advertiser's acceptance of DNT was entirely dependent on people not using it. If Microsoft had left it as opt-in but the majority of consumers had turned it on, we would have seen the same result.
You can see the same principle to explain the response to iOS's privacy changes, which are not opt-in or opt-out; they force the user to make a choice. The ad industry is still furious about this.
DNT could have only worked if no one used it, and that is not a privacy outcome that is worth pursuing or advocating for. It's not Microsoft's fault that DNT went away, Microsoft was just the excuse the advertising industry needed to avoid it. I don't think that DNT was ever anything other than an excuse the industry could use to keep tracking people and avoid legislation without changing anything. Microsoft didn't take it away from you, they just pulled back the curtain and showed you the truth about it.
A privacy standard that fails as soon as it's turned on by default is almost worthless and it really shouldn't be advocated for in the first place.
It is a waste of time and energy to advocate for a privacy standard that can only ever be opt-in. Doesn't matter what Microsoft's motivations are, doesn't matter what they're doing elsewhere. A privacy standard that gets abandoned if it's ever defaulted to "on" for new users is worthless.
It's not Microsoft's fault that DNT went away, DNT was always going to go away the moment a sizable portion of the population started using it.
If Microsoft had asked users when starting up their computers whether they wanted to turn on DNT, and if a majority of users had turned it on, the advertising industry would still have had a meltdown and rejected the standard. We know this because that's exactly their reaction to iOS's changes, which are not turned on by default but force the user to make a deliberate choice when opening the app.
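For context on how lightweight DNT was technically: the entire mechanism is one request header, `DNT: 1`, and honoring it is a few lines of server code. A hedged sketch (not any particular framework's API) of what compliance would have looked like:

```javascript
// Illustrative only: decide whether to track a request based on the
// DNT header (W3C Tracking Preference Expression). Everything hard
// about DNT was policy and incentives, not engineering.
function trackingAllowed(headers) {
  // Header names are case-insensitive; normalize before lookup.
  const dnt = Object.entries(headers)
    .find(([name]) => name.toLowerCase() === "dnt");
  return !(dnt && dnt[1].trim() === "1");
}
```

The triviality of the code is the point the comments above are making: the standard failed because advertisers would only honor it while almost nobody sent it.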
Also changes the perception that Microsoft were doing it to attack Google.
Once it's a standard you'd get it enacted in law that companies have to respect the setting.
I don't know that an asteroid isn't going to kill me in the next 3 minutes, but I can make some educated guesses about the world. My read of the situation is consistent with the way that the advertising industry has reacted in every single similar situation. It's consistent with the way that they're acting right now with tracking restrictions that require affirmative consent. There's a pattern here.
> Also changes the perception that Microsoft were doing it to attack Google.
> Once it's a standard you'd get it enacted in law that companies have to respect the setting.
If we're going to talk about suspicions, I have seen zero evidence that a DNT standard would have actually helped get legislation passed. Generally speaking, that theory does not line up with my experience of how lawmaking works.
Nor does getting it passed into law mean that advertisers wouldn't immediately lobby for it to be revoked as soon as any company turned it on by default. Heck, they'd try to force it to be opt-in instead of opt-out in the law itself if given half a chance. They would argue that Microsoft was being anti-competitive (exactly as you're arguing right now). Facebook is currently arguing that Apple's privacy controls are anti-competitive, there's no reason to believe they wouldn't make the same argument about Microsoft if DNT was turned on by default, regardless of what the law said.
A privacy standard that cannot be turned on by default is not worth fighting for.
If they are not able to make those numbers, they fear becoming irrelevant.
These ad-tech companies are simply stuck and unable to innovate due to their current clients and promises.
Every time I see an ad that relates to my current unique interests I feel spied upon and concerned that my private life is leaking without my knowledge or control.