My link: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg
Facebook made: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg?fbclid=IwAR2...
I guess if FB really wants, they could make a second fetch to ensure that their added params don't break the third party server. Or could they add a whitelist of domains that use their first-party tracking?
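Mechanically, the rewrite is just a query-string append; a minimal Python sketch of what Facebook appears to do (the click-id value here is made up):

```python
from urllib.parse import urlsplit, urlunsplit

def append_click_id(url: str, click_id: str) -> str:
    """Append an fbclid query parameter the way Facebook rewrites outbound links."""
    parts = urlsplit(url)
    extra = f"fbclid={click_id}"
    query = f"{parts.query}&{extra}" if parts.query else extra
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

# The jpg URL from above gains a tracking parameter it never asked for:
print(append_click_id("https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg", "IwAR2example"))
```

Whether the destination server tolerates that extra parameter is entirely up to the destination, which is the crux of the complaint.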
I really don't like the end result right now, looking like "the web works" from inside FB, but not when you try to follow this link out of it. I don't believe at all that that is FB's intent here, but it's just one more time that some silo breaks another part of the ecosystem, and to an untrained eye it looks like the third party is the culprit.
Regardless of the protocol, it's not unreasonable to return a 422 or 403 by default for a malformed request when it's first seen, since it could indicate that something sketchy is going on; known parameters can be whitelisted later.
So if you're part of the group who sees this as a good thing, I'm genuinely interested to understand why, and whether you view the mass surveillance of the general public by advertising companies as bad.
Previously, only large brands and national/multi-national corporations could afford to advertise at scale and reach customers through TV/Radio/Newspapers (that too with a high minimum spend).
Now your local mom and pop bakery could have a spend as low as $100 a month to reach their customers and help drive their business.
The world is not black and white, and neither is the morality of advertising.
I hope this perspective was useful to you.
I don't see any indication that advertising online is an industry digging its own grave. On the contrary, the increase in quality of ads over the past 10+ years (especially in terms of unobtrusiveness and relevance) would suggest to me that online advertising is digging its way _out_ of the grave it dug itself with flashy, irrelevant ads throughout the 90s and early 00s.
Then, I started needing analytics for my own business. Without analytics, I wouldn't be able to sell with efficiency, and therefore, I wouldn't have a business. Granted, the anti-consumerist in me thinks maybe as a society we shouldn't be so concerned with our efficiency to sell. But, we live in a capitalist world, and I don't see that changing any time soon.
The way I see it now, I'm less concerned about tracking than I am about how big some businesses are -- especially in this space.
At every start up I know, they use analytics, and no one is doing anything spooky. But, I'm sure there's plenty of spooky stuff going on at the FAANGAMUs.
Where do you draw the line? This is parallel to the discussion around government surveillance. Just because they/you can doesn't mean they/you should.
If Internet tracking had no potential use to governments then they'd be regulating the shit out of it. The problem is that governments want their own noses in the same trough, and so all these privacy-invasive technologies continue to be developed. The fact that it's not illegal means that anyone with the ability to implement it can, as long as they can sleep at night.
As to solutions that could help with "selling efficiency", maybe some kind of agreed tiers of analytics, from benign to spooky, that users can opt into or out of when visiting a website or using an app - which GDPR is a bit of a kludgy solution for. The problem is that it only takes one bad advertiser to break the agreed rules and the trust is gone again for all advertisers.
One bad apple.
Analytics are unquestionably useful. Collecting the data without user consent is what potentially should be regulated.
The general retort I hear is "what do you have to hide?", as if the only things people want to hide are bad and evil.
Facebook, Amazon, Apple, Netflix, Google, AirBnb, Microsoft, Uber
Edit: M for Microsoft, derp
Uber had that secret API access, granted directly by Apple, that let it see which apps you had open at all times (e.g. whether you'd opened Lyft) so it could charge you different rates.
AirBNB is similarly large, so I thought they were worth mentioning.
I had this idea of an evil group of people getting together everyday and looking at this data and somehow using it to puppet my entire online-life.
Sure, this group of people exists at every decent sized online company, and sure they're trying to get you to spend more time and money on their site/app/whatever, and sure this tracking data helps them.
Sure, SOME of these websites are peddling fake news or selling scams or preying on the poor/unfortunate/uneducated/etc. But I think that's the exception, not the norm.
Most successful companies make a product people genuinely like. There are millions of people that would buy and enjoy this product if they knew about it. Most companies are just trying to use this tracking data to get their product in front of as many of those people as they can, and as few people that don't want their product. They're trying to fine-tune their messaging to make sure it appeals to the people that actually like their product. They're trying to use it to figure out how to BETTER make a product people actually want!
Again, if you're saying that increasing our efficiency in sales is a bad thing, you're saying that capitalism is bad. But I've just come to see this data as something that enables product evolution to occur much faster. I see this data as something that's helping the world, mostly, get more of what it wants.
Like everyone says, Capitalism is the worst economic system, except all the others we've tried.
Unfortunately, I've seen too many product decisions catering to the manipulative aspects of adtech. UX often suffers rather than improves with ads. Online platforms all seem to follow the same ad-monetization game plan these days, which results in messes like Frankensteinish apps (see the official Twitter app).
As for actual hands on manufactured products or services, I'd like to know how ads improved the UX.
I guess your summary is about right.
> we live in a capitalist world, and I don't see that changing any time soon.
We were living in a capitalist world two decades ago, and we didn't have a significant amount of tracking back then. If you are concerned about your competition using tracking, then just try to make a better product.
Analytics can give you a decent glimpse of revealed preferences, which may or may not be what you're after.
Whether or not this is a good thing, depends on a lot of subjectivity, sure. But suppose you run a porn site - if you asked most users what they wanted in porn (before they had seen any), they would probably say one thing. If you examine what kinds of videos people look at, you'll see another. (This theme, with actual data from pornhub, is explored at length in the book "Everybody Lies" by Seth Stephens-Davidowitz.)
Both routes (asking and instrumenting) have their uses.
I'm kidding of course, but the idea is that analytics tell you part of the user story without answering the deeper "why" questions. They certainly have a place in tech, but a smaller one than they're currently afforded.
There's a flawed belief that it's necessary. UX on the other hand suggests otherwise. AdTech is not concerned with UX though and tries to wrap targeting in some kind of pseudo user benefit—spin.
Good products and services sell even without tracking. Advertising is an economic powerhouse though and will always push for anti-UX trends because it fundamentally runs polar opposite to the user experience.
Advertisers study how to sell a product, and the most important product they have to sell is advertising.
However, it's not hard to reason why people whose livelihoods depend on being able to track users and increase the value of their ad inventory would be happy about this.
People are unusually good at separating their personal interests from consumer interests. I've observed this emotion arise in many entrepreneurs first hand: be they in brick-and-mortar retail, conventional energy, or obsolete auto parts, it's common for people to be happy about events that benefit their livelihood even when they have a negative impact on humanity or that ecosystem.
Those people can go hungry or find another line of work. I have zero compassion for that behavior. Justify it how you want, but most people abhor it.
Basically, FB is expanding its tracking, adding first-party tracking alongside its third-party cookie tracking. I suspect the click-id query string is part of that rollout; it helps FB get around things like Apple's new ITP (Intelligent Tracking Prevention) 2.0 in Safari.
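For illustration, first-party click-id capture generally works like this: the destination site reads the click id off the landing URL and persists it in its own first-party cookie, which third-party cookie blocking doesn't touch. A rough sketch (the cookie name and lifetime are illustrative guesses, not Facebook's confirmed implementation):

```python
from http.cookies import SimpleCookie
from urllib.parse import parse_qs, urlsplit

def click_id_cookie(landing_url):
    """If the landing URL carries an fbclid, emit a Set-Cookie header that
    persists it as a first-party cookie. Name/lifetime are illustrative."""
    query = parse_qs(urlsplit(landing_url).query)
    if "fbclid" not in query:
        return None  # no click id on this visit
    cookie = SimpleCookie()
    cookie["_fbc"] = query["fbclid"][0]
    cookie["_fbc"]["max-age"] = 90 * 24 * 3600  # survives across later visits
    return cookie.output()

print(click_id_cookie("https://example.com/article?fbclid=IwAR2example"))
```

Because the cookie is set by the site you're actually visiting, ITP-style third-party cookie blocking doesn't remove it.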
I’m pretty excited to see this roll out more broadly.
FB just doesn't understand the optics they create.
Sure they do; they just also know that the vast (VAST) majority of people don't understand the implications and/or don't care.
I suppose there is someone even thinking that Portal will be good for their home.
I think it’s a great product and can’t wait to have mine at home.
All the fuss about tracking is nonsense. Ads are a great way to monetize products that you want to make available to a large audience. And obviously as a user you want meaningful ads and not just a bunch of garbage. To do that, tracking is necessary... seems like a straightforward value exchange!
I also want to note that I buy stuff frequently from ads...some of my most loved items found me through ads! It’s frankly a great way to discover great stuff.
Do I sometimes see ads that are not relevant? Sure, just as I see post from friends/family that are not relevant...I just scroll by, easy as that!
If this wasn't Facebook, this wouldn't be news; gclid has been around for years.
A lot of malicious links are just base64'ed pointers to another redirect service, which in turn points to another base64'ed address (and so on, for as long as your head can keep up).
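A toy illustration of unwrapping such a chain (the parameter name and URLs are hypothetical; real redirectors vary):

```python
import base64
from urllib.parse import parse_qs, urlsplit

def unwrap_redirects(url, param="u", max_hops=10):
    """Follow a chain of redirectors whose targets are base64-encoded in a
    query parameter, a pattern some malicious shorteners use."""
    for _ in range(max_hops):
        query = parse_qs(urlsplit(url).query)
        if param not in query:
            return url  # nothing left to unwrap
        url = base64.urlsafe_b64decode(query[param][0]).decode()
    return url
```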
UTM parameters tag campaigns at the aggregate level, to be used for reporting. The fbclid is almost certainly unique to the click. While you could make, e.g., utm_content unique per click, that's not what it's for. Anything in a UTM parameter is intended to be human-readable and will almost certainly appear as-is in a report somewhere. Click-ID parameters are internal IDs used to join data sources, which is not the type of data that should go into UTM parameters.
Note that Google Analytics, the tool that invented UTM parameters, itself does not use UTM parameters when it does this sort of thing. Google Analytics uses the gclid (AdWords) or dclid (DoubleClick) to join against user or click level data from other tools.
The "fbclid" parameter, on the other hand, seems intended to track individual clicks. That is, Facebook wants to keep tracking individuals when they follow links to off-site pages.
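The distinction can be sketched in code: campaign tags are human-readable aggregate labels, while click ids are opaque per-click join keys (the click-id list here is partial and illustrative):

```python
from urllib.parse import parse_qsl, urlsplit

# Known per-click join keys (partial, illustrative list)
CLICK_ID_PARAMS = {"gclid", "dclid", "fbclid"}

def classify_params(url):
    """Split a URL's query into human-readable campaign tags (utm_*) and
    opaque per-click IDs used to join data sources."""
    campaign, click_ids = {}, {}
    for key, value in parse_qsl(urlsplit(url).query):
        if key in CLICK_ID_PARAMS:
            click_ids[key] = value
        elif key.startswith("utm_"):
            campaign[key] = value
    return campaign, click_ids
```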
Browsers will now have to resort to removing query parameters to prevent tracking. And websites should really use click-to-enable sharing buttons to prevent Facebook from snooping on everything.
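A query-parameter-stripping pass might look like this sketch (the blocklist is illustrative, not exhaustive):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

TRACKING_PARAMS = {"fbclid", "gclid", "dclid"}  # illustrative, not exhaustive

def strip_tracking(url):
    """Drop known tracking parameters, as a privacy-minded browser or
    extension might do before issuing the request."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), parts.fragment))
```

The obvious downside is the arms race: any new parameter name starts out unrecognized and slips through until the blocklist catches up.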
Maybe among the URLs shared on Facebook there are a few whose servers only respond to a fixed amount of parameters, changing their behaviour when additional unused parameters are appended to the query string, but I imagine that the number of such cases is so low it's not even worth considering.
What exactly is Facebook breaking, in your opinion?
Would Facebook also break things if they were instead making an async request to the destination and appending a custom header to it, something like "X-Coming-From-Facebook"?
I don't get the part about async requests. What's the scenario?
Extra headers are typically ignored, not least because different clients have sent different headers since the beginning.
I know multiple systems, however, that decode the query string and complain about unknown options, or that don't accept a query string at all for some resources. In the latter case it's ignorance; in the former it's strict input validation.
Notably, most links with GA marketing parameters are under the control of the website owner. Facebook links are not. This makes such a work-around less feasible.
An incredibly small number of sites might already be using `fbclid` internally, and an even smaller number won't be able to update their sites.
I am totally on board the don't-break-the-web train, but this just doesn't seem like a problem to me. Maybe once stats come out we'll see that it's a bigger issue, but... I kinda doubt it.
I entered: https://pbs.twimg.com/media/Byv5uWSIIAEf38C.jpg
So sure, there may be an argument that the server should ignore that param. But it's absolutely false to say it "isn't breaking anything".
I would consider it to be pretty bad practice to treat query params this way, extra query params should be ignored. However, web standards are descriptive, not prescriptive. If enough sites have strict requirements about the number of query params, then it doesn't really matter what good practice is, and Facebook should accommodate this by moving back to cookies or at least make it opt-in or something.
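The lenient approach amounts to reading the parameters you know and silently ignoring the rest; a minimal sketch (the known-parameter set is hypothetical):

```python
from urllib.parse import parse_qs

KNOWN_PARAMS = {"post_id", "comment_id"}  # this endpoint's real params (hypothetical)

def handle_query(query_string):
    """Lenient parsing: read the parameters you know and ignore the rest,
    so an appended fbclid can't turn a valid request into an error."""
    params = parse_qs(query_string)
    return {k: v[0] for k, v in params.items() if k in KNOWN_PARAMS}
```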
After all, as a developer, it's my site - I choose the URLs (including the query strings) which are valid and acceptable to me.
I wonder how many sites other than Twitter are rejecting requests with unknown query parameters?
Adding a request parameter absolutely will break things. And they knew this. The only question was what's worse: Not being able to track some people or breaking some of their links. Facebook decided the former is more important to them and their customers.
And even if nothing else breaks, uglifying the URL people are posting is in itself an anti-feature.
That's a new one for me, I need to make sure I remember it in the future. From what I'm seeing online, it's not even necessarily considered bad practice, so... I dunno anymore.
But agreed, Facebook should back this out.
> Facebook should back this out.
They won't. They knew what would happen and did it anyway.
This HN thread is a perfect example of a news bubble. Googling "fbclid" returns the answer in the first result, but HN votes up an article that has no information and treats it as some secret tracking that FB has implemented. HN is excessively biased against any discussion of tracking/analytics on the internet. The community allows no room for true discussion - only blatantly biased opinions.
Edit - reworded to be less aggressive
According to the metadata for the site, it was originally published on 2018-10-14 and last updated 2018-10-16.
Facebook's own article about the feature came out 5 days after this article was published. So, at the time, Facebook _was_ being secretive about it. Aside from that one line, the entire article reads more like "this is new, I wonder what it does".
Lastly, when I googled "fbclid" the top three articles are completely unrelated to Facebook (but, then, I'm not in marketing, so this doesn't surprise me) and the fourth is this very article.
The first link (for me) when googling fbclid is the reddit post in r/analytics I linked to, which doesn't have a ton of info but gives more than what the author had. Though you're correct, it was posted after the author originally posted, and I can't fault him/her for not checking in again a few days later.
Now the interesting question will be whether "fbclid" can be tied to individuals. And I couldn't readily find this info in the links you posted. Maybe I'm bad at reading?
Could someone explain or give a reliable article that explains this well?
There's no way for them to know whether or not the extra params on the URL change the resulting page.
(i.e. example.com/index.php?post_id=1 and example.com/index.php?comment_id=1 could be very different pages, or they could be the same; you don't know).
So in comes the canonical URL!
This tells Google the proper URL for a specific page.
That way, if Google gets to a page using two different URLs, it can tell that they are the same page.
You can list it by adding a tag to your HTML head.
You can even do fancy things like rewrite URLs entirely (i.e. if the crawler hits example.com/?category_id=1&item_id=2, you can correct the ugly URL by listing the canonical URL as example.com/category/1/item/2).
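The declaration itself is a single link element in the page's head; a tiny helper to render it (the example URL is hypothetical):

```python
from html import escape

def canonical_tag(url):
    """Render the <link> element that belongs in the page's <head> to
    declare its canonical URL."""
    return f'<link rel="canonical" href="{escape(url, quote=True)}">'

print(canonical_tag("https://example.com/category/1/item/2"))
```

Whatever tracking parameters a crawler arrived with, the tag tells it which single URL to index.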