Facebook showed me my data is everywhere and I have absolutely no control over it (buzzfeednews.com)
420 points by hhs 14 days ago | 188 comments



> These advertisers are running ads using a contact list they or their partner uploaded that includes info about you. This info was collected by the advertiser or their partner. Typically this information is your email address or phone number.

Newsflash, BuzzFeed's ad networks onboard and sync information about their readers in exactly the same way. Do people think that BF loads Quantcast, Scorecard Research etc onto their site for the good of their health?

That's not to mention the viral quizzes and other data gathering exercises that BuzzFeed invented and sold as a product.

I have no idea why people don't call BS on news orgs that run 'exposes' on Facebook's ad targeting, when they target, track and follow their readers in exactly the same way.


I have no idea why people don't call BS on news orgs that run 'exposes' on Facebook's ad targeting, when they target, track and follow their readers in exactly the same way.

Three reasons:

1. In most responsible news organizations, there is a significant firewall between advertising and news. In two places I worked, the sales people weren't even allowed to go into the newsroom.

2. News organizations are not IT departments. A reporter is not involved in how a web site is built, or what goes on behind the scenes. There are different people in different departments for that.

3. Just because a newspaper does something bad with its web site does not make what happens at Facebook/Google/etc... any less bad. SV needs to get over the whole "But Bobby jumped off the bridge, too!" mentality.


1. No one implied that this advertising was impacting their reporting.

2. They're reporters... if they don't know what's going on in their own backyard, there's a serious issue.

3. Maybe, but there's something ironic about the fact that this very reporting only exists because of these ad practices. This person most likely wouldn't have a job if the very practice they are reporting on didn't exist.


Back before the age of digital photography, there was a common issue with animal rights activists and photography. On one hand, photography required gelatin, an animal product, but on the other side, photographs of animal cruelty played an incredibly important role in spreading their message.

It is an interesting line of thought to consider these compromises and when they may cross the line. Which means are justified by the ends? Which means are not?


>2. They're reporters... if they don't know what's going on in their own backyard, there's a serious issue.

The point is that it's not really their back yard. It's a different office on a different floor, possibly even a different subcontracted company than where the journalists are working. They don't have any more scope to be aware of the ad tech than they do understanding how a CDN works.

By analogy, this is kind of like expecting journalists in the old days to be especially aware of conditions in paper mills because their papers are printed on paper.


I don't see how the "location" matters. I'm arguing that it's a similar subject. If you're investigating advertising practices, I'd hope that you know quite a lot about that subject in general, which includes how it's done at other companies.

How can you talk extensively about the way Facebook does advertising without knowing how it's done everywhere else? Doesn't that make you a bad journalist?

By analogy, it'd be like if someone was reporting on how a burger at McDonalds tastes, without having eaten a burger anywhere else.


> The point is that it's not really their back yard.

If you're going to research a topic and some of that research turns up as "Lots of websites run these ad networks" it is a necessary step to ask "do we do that too?".

So there's two options here:

1) The reporter knew the parent company does this.

2) The reporter doesn't know.

In case 1, they either don't talk about it because they are blocked or they don't think it is necessary (which I'd argue is at best misleading and at worst malicious). For case 2, well that just doesn't seem like good reporting.

So I think your analogy doesn't work. It isn't because "people don't know tech" but rather "are journalists asking the right questions and doing good research." If the latter is poor because "they don't know tech" then we have a major problem and as the tech class we need to call them out on it (create a feedback loop to solve the problem).


Taken further, it's like writing articles about pollution from paper mills.


#3 Just argumentatively, it seems a bit like an ad hominem. Yes buzzfeed does tracking and everything but that doesn't make their argument against Google/Facebook any less valid.

> This person most likely wouldn't have a job if the very practice they are reporting on didn't exist.

Buzzfeed has picked up $496 million in VC funding [1] since its inception. It's a catch-22 where they could absolutely have done this reporting without ad-based monetization, but the potential for ad-based monetization is why the VCs parted with money in the first place.

[1]https://www.crunchbase.com/organization/buzzfeed#section-fun...


> Just because a newspaper does something bad with its web site does not make what happens at Facebook/Google/etc... any less bad.

Absolutely it does. The prevailing narrative is that SV tech companies are innovators in this space, invading your privacy in ways that normal companies would never dream of. "Facebook engages in industry standard data collection" is a very different story, even if the industry standard is still bad.


I think this is an example of Tu Quoque[1] logical fallacy.

[1] https://en.wikipedia.org/wiki/Tu_quoque


If the parent had made the claim that it wasn't wrong at all because BF does it, then this would be relevant.

However the parent's claim is subtly different: it states that the industry practice of data collection is not unusual, not that it's not wrong.


Are you just making an observation or do you think that disqualifies the parent's point somehow? If so, your comment is probably an example of the argument from fallacy fallacy. [0]

[0] https://en.wikipedia.org/wiki/Argument_from_fallacy


> Absolutely it does.

How do you figure? It seems to me that even if the news organization does everything Facebook/Google/etc. does, that doesn't make Facebook/Google/etc. less bad. It only makes the news organization equally bad.


One of the concerns people raise is that Facebook, Google, etc. are lowering the bar. That they're uniquely bad at privacy, or uniquely brazen about violating it, and user privacy is becoming harder to achieve because of their actions.

If everyone else also violates privacy in the same ways, this aspect of the narrative isn't true.


"Less bad" and "equally bad" are incompatible statements, you're claiming both are true


‘Less bad’ here compares one valuation of Facebook's actions to a previous one, whereas ‘equally bad’ compares Facebook's actions to those of Buzzfeed. So they can be both true (regardless of whether they actually are).


By "less bad", I thought the comment was saying that Google/etc. actions are less objectionable. That's what I don't think is true. If I have misunderstood, then I apologize.


Indeed. The minimum they can do is properly disclose their own position in this, because it makes them appear very hypocritical.


The even sadder part about this article... Buzzfeed does this worse. So it's not only the "pot calling the kettle black", it's someone calling another party out for things that party actually handles better than they do themselves.

The reality is that both Google and Facebook have cleaned up the ad industry (but not enough, of course).

Data brokers continue to be the worst actors in the ad industry.


You can't have it both ways. The buzzfeed platform is tracking/collecting. Either the reporter knows this and is ignoring it, or is unaware, which makes the article seem less credible.


Or they know about it and were unable to enact change before publishing.

Publishing articles with the same poison still raises awareness of the issue, though it would be best for them to disable ads on these articles, or to at least be upfront.


yes it does. you don't get to criticize someone for an action while performing the exact same action and expect your argument to have a shred of credibility


No it doesn't. The argument speaks for itself. If I murder someone in cold blood, then get up the next morning and call a press conference saying "Murder is wrong! We need to put a stop to murder!" no one would chime in and say "No way, murder is perfectly fine, you just did it last night!"

Call the news orgs out for doing the same reprehensible activity, yes. But let's judge the arguments about which activities are reprehensible on their own merits.


Hypocrisy comes from the Greek word for pretending or acting. You don't usually listen to hypocrites on an issue, because they lack the character to have a valid stance on the issue. If a murderer kills someone and then says murder is bad, we know they're full of crap and don't actually believe murder is bad, they're simply saying that to appear remorseful and less guilty.

In this situation Buzzfeed is pretending that this tracking is bad to generate more clicks so that their ads can invade more privacy to generate more income, exactly like Facebook. If they actually thought it was bad, their business model wouldn't rely upon it.

Only suckers believe Buzzfeed actually thinks this is bad.


I think it's important to acclaim arguments against bad behavior, even when made by people who are doing that same bad behavior.

(1) Having good arguments out in the world is a good thing.

(2) Positive feedback tends to reinforce behavior - and we want to reinforce news orgs making good arguments against bad behavior.

(3) After you have acclaimed their argument, it's much more effective when you use it against them.

If you just constantly snipe at people and organizations, I think it becomes much harder to effect change.

In "Phaedo," Plato has Socrates give a lesson about misology - that is, hating all reasoning and arguments, and assuming they are bad because you can always find flaws and it's so difficult to reach certainty. It's the intellectual version of misanthropy. Socrates cautions strongly against misology. Just because there are problems with an argument, doesn't mean we shouldn't continue to try to improve and take the good parts from every argument we can find, no matter how flawed the other parts (or the speaker) are. I think this is an important lesson in these times. I think we should stop fighting over whether a person or organization is allowed to say certain things, and concentrate on what parts of what they say should change our behavior.


I agree with most of what you say. But Socrates not only spoke of Logos; he spoke of Ethos and Kairos. Bad people making good statements can often weaken an argument because no one wants to be associated with lying aholes. I'm sure Hitler said something good about something, but I'm sure as hell not using his arguments.


> I’m sure Hitler said something good about something, but I’m sure as hell not using his arguments.

That's ridiculous. If Hitler once made a good argument about something, that doesn't invalidate usage of that same argument by other people.

I mean really. I can't even think of a hypothetical example where that would make sense.

And Hitler fucking sucked, fuck that guy.


>we know they're full of crap and don't actually believe murder is bad, they're simply saying that to appear remorseful and less guilty.

Actually, no. The world is full of people who do bad things while believing they are bad.

For example, I engage in a lot of practices that I think are harmful for the environment. I buy food that I suspect/believe is causing the environment harm. I buy products made with packaging that I believe causes harm. But I'm still openly against those practices. And if a ballot measure comes up to ban products involving those practices in my city/state/country, I'll vote for it.

There's no inconsistency here. The notion that one is required to practice what they preach is a flawed one. Knowledge and practice are orthogonal. People do bad things because they get value out of them. Knowing it is wrong doesn't suddenly reduce the value one gets from them. My food tastes the same regardless of my knowledge on the harm it does to the environment.


If a murderer kills someone and then says murder is bad, we know they're full of crap

Only if we already know they've murdered someone before they advocate against murder. Until we do find out, the murderer appears the same as anybody else, despite being a hypocrite.


Your analogy only makes sense if Buzzfeed stopped using the same practices. If you continue to murder people while shouting "murder is bad" then you might be correct but you're still a hypocrite.


Yes, but the more important thing is that murder is wrong, not that someone's a hypocrite. Hypocrisy is the most minor of malfeasances.

A smoker who writes a column that smoking is bad is still doing good writing.


This argument is very common against people with fringe political beliefs. "If you don't believe in property rights, why do you have a car?"


That's different. If you don't believe in property rights, you think that the law is fundamentally broken, not that any individual who owns property is evil. This article is basically doing the latter, which is why it's difficult to take seriously.


It's common against everyone. "You believe in climate change? Yet you sometimes use a car? Checkmate!"


...which doesn't invalidate the argument.

A charge of hypocrisy is a straight-up ad hominem. It might make you mistrust the person making the argument, but it doesn't say anything about the facts.


This post is completely correct: yes, you're correct and you're a hypocrite.

But: so what?

When I read the news, the last thing I care about is the mores of the news organisation. Reporting on a politician pilfering taxes? Couldn't care less if the reporter panama papered his way through life. Writing about a politician illegally snooping on citizens? Don't give a damn if you got that info by illegally snooping on the politician. Is the news true? That's all I want to know.

I read the news for the news. Are the facts solid? Good. Are you crooked? Don't care.


> then you might be correct

The comment I was replying to seemed to indicate that the Buzzfeed author's stance was incorrect. This is a fine point, but really important, and I think it gets lost in public discourse a lot these days: hypocrisy does not invalidate an argument.


Whether or not you are a hypocrite has no bearing on whether or not what you are saying is correct.


If you were obviously a murderer, and then publicly railed against murder, it would make it seem suspiciously like you were up to something, which would tend to make people doubt what you're saying.

It doesn't make what they're saying automatically invalid, but knowledge that the person saying something is obviously a hypocrite is a form of evidence about how likely what they're saying is true.


No one is attacking the actual argument. We're attacking Buzzfeed's belief in their own argument. We're saying they're like the murderer, who once caught, says murder is bad in the hopes of leniency. Only Buzzfeed is saying privacy violations are bad in the explicit hope they'll get to violate more privacy. Worse than hypocrites, this makes them scoundrels.


I couldn't care less whether "Buzzfeed" believes their own argument or not.

I'm putting the quotes there because Buzzfeed is not a singular entity with a set of beliefs it identifies with. It's also not like a cult where everyone has to believe the same ideas. Which is why it is nonsensical to ascribe hypocrisy to a non-person entity ...

And that is exactly how you can get an article against adtech written for and published on a platform that is using said adtech similar to what the article speaks out against.

The worst part of it all is IMHO that the entire top of this discussion is dominated by arguing about whether Buzzfeed is hypocritical or not, instead of discussing the actual topic at hand.


I think you missed the point. The reason the top of the discussion is dominated, as you say, is that the argument is obviously true to them. The more interesting story is Buzzfeed arguing that it is true, with the takeaway being: are they run by horrible people, or do they just not really believe it is true?

> no one would chime in and say "No way, murder is perfectly fine, you just did it last night!"

No, but they might say: "Why should we listen to you!? MURDERER!" Then proceed to stone you to death, and see who dares get up on the podium to make an announcement about stoning.


No, we wouldn't believe that you think it's bad since you did it. We still know it's bad.


A counterexample: if I were a pack-a-day smoker and told you smoking is bad for you, and that you shouldn't do it as it's bad for your health based on XYZ studies, that wouldn't make my statement any less valid.


Why not, may I ask? Just means both sides are wrong.


You’re arguing about credibility like it’s interchangeable with something being factually correct.

Those are different constructs.


It's not exactly the same. If I'm doing it, it might be benefiting me. If others do it, it might harm me.


> 1. In most responsible news organizations, there is a significant firewall between advertising and news. In two places I worked, the sales people weren't even allowed to go into the newsroom.

Do reporters not get any access to their analytics?


It doesn't make it any less bad, but it does make me question the motivations of the reporter.

Either A) They know what their own website is doing and think it's OK, in which case there may be a conflict of interest. If that's the case, we should take any suggestions they have about how to resolve the problem with a grain of salt.

or B) They don't know what their own website is doing, in which case they've done a pretty poor job researching their own article. I don't expect a reporter to be a tech expert, but I do expect them to be inquisitive enough to think, "when people say that advertising pays for reporting, might they conceivably mean the same thing as when they say that advertising pays for Facebook?"

Facebook's privacy violations are bad. The hypocrisy of journalists calling them out doesn't change that. But in the privacy community, it should make us suspicious of their articles for the same reason we should be suspicious of a scientific study of sugar paid for by the meat industry. It should push us to double check their claims and question what their motivations are.

These are people whose interests either happen to align with ours temporarily, which might change in the future, or who are allies who haven't done enough research to be able to make strong claims about what our responses and policy changes should be.

It's not that Buzzfeed is wrong about Facebook. It's that they're not currently in a responsible or knowledgeable enough position to lead the conversation about Facebook.


This is true: hypocrisy is not a proof of incorrectness. Smoking does not become healthy simply because a smoker told you it wasn't.

On the other hand, if adtech practices are bad (and they are), then it is not only Facebook that needs to change.


If only journalists were trained to investigate and learn information from organizations outside their own...


> In two places I worked, the sales people weren't even allowed to go into the newsroom.

And the journalists in turn aren't allowed to look at their own websites when researching a story?

> A reporter is not involved in how a web site is built, or what goes on behind the scenes.

But they do talk to IT people when they write stories about IT things, don't they?

I'm all with you that it doesn't change the story's correctness one bit, but it does call into question the self-awareness of journalists as part of companies. Unless it's a secret move to get people to focus on the stuff their organisation does which they can't publicly comment on for fear of being sacked (by the ads people, presumably), in which case that would be quite different.

edit: I'd love to reply, but somebody in their limitless wisdom has decided to time me out for whatever reason.


But they do talk to IT people when they write stories about IT things, don't they?

Generally, no. They talk to people on the outside.

Unless it's a secret move

It's always amusing to me the conspiracy theories people have about journalism. These are people who boost their egos, their living, their careers by telling people things. The chances of hundreds of thousands of journalists keeping a secret about anything are pretty close to zero.

fear of being sacked (by the ads people, presumably)

This only very rarely happens (It happened to me once, and then the company contested my unemployment claim!), but only at very small organizations. Like where the entire newsroom is fewer than ten people.


> 1. In most responsible news organizations, there is a significant firewall between advertising and news. In two places I worked, the sales people weren't even allowed to go into the newsroom.

Editor: "So your last screed about Facebook got a ton of clicks and shares, can you do another?"

Tell me more about that firewall. In modern news orgs, you're selling the words next to the content, not the readership.

> 2. News organizations are not IT departments. A reporter is not involved in how a web site is built, or what goes on behind the scenes. There are different people in different departments for that.

Journalists are inquisitive, intellectually curious types. You have to call into question the ethics of someone prepared to write about something they have a good idea that their own company engages in.

> 3. Just because a newspaper does something bad with its web site does not make what happens at Facebook/Google/etc... any less bad. SV needs to get over the whole "But Bobby jumped off the bridge, too!" mentality.

This is just tu quoque. It's not exonerating Facebook at all (though Facebook typically have a deterministic data set to work with, which reduces the kind of shady practices they need to engage in with third party data sets), it's merely saying that you don't get a free pass on calling this kind of stuff out if you're engaging in the same kind of data gathering yourself.


> Journalists are inquisitive, intellectually curious types. You have to call into question the ethics of someone prepared to write about something they have a good idea that their own company engages in.

So what is the ethical thing to do? Quit your job? Choose not to report on the unethical behavior? Escalate internally instead of externally? Write the story about the company you work for, knowing full well that you won't ever be published and it will end up being pointless?

I don't think anybody wants to give buzzfeed a free pass, but I just don't see how buzzfeed's data practices reflect on the ethics of their individual journalists.


The cost of an action does not change its ethical merit, or lack thereof. People are responsible for doing unethical things, even if someone ordered them to.


But this guy isn't even being ordered to do anything unethical. He just wrote an article.

The same people that ordered him to write this article have also ordered other people to do unethical things. But that's a different scenario than you just mentioned. So if you think this is immoral, then I imagine that your actual argument is more of a guilt by association thing.


>> These advertisers are running ads using a contact list they or their partner uploaded that includes info about you. This info was collected by the advertiser or their partner. Typically this information is your email address or phone number.

> Newsflash, BuzzFeed's ad networks onboard and sync information about their readers in exactly the same way. Do people think that BF loads Quantcast, Scorecard Research etc onto their site for the good of their health? ...

> I have no idea why people don't call BS on news orgs that run 'exposes' on Facebook's ad targeting, when they target, track and follow their readers in exactly the same way.

What you're advocating is basically a circular firing squad.

If you have to be impeccably pure to call out a wrong (and furthermore, only associate with impeccably pure people and organizations), then no one will ever call out any wrongs and they'll fester and grow.


I think it's possible that the author doesn't even know this. Does BuzzFeed have an interface where a user can go to see this information?

The level of understanding that most people have about online tracking is just really, really low, and this author probably wouldn't know about any of this if Facebook didn't have that particular page.


EFF's Privacy Badger, NoScript, Unlock, AdblockPlus with its many lists, and many many more add-ons can show you all the nasties on each page, and you can click-block them once and for all. The same add-ons are also available for Firefox on Android.

Edit: yes, most likely the author is someone who only seldom writes for the site, or who just sells his texts and doesn't really pay attention/care where they are posted.


OP here. Please note that the author is a senior reporter for BuzzFeed News, as cited here: https://www.buzzfeednews.com/author/katienotopoulos.


That just makes it worse imho.


Yes I use many of those. I whitelist every domain allowed to execute JavaScript on my machine, and for the most part I get a better browsing experience for it.

So with regards to BuzzFeed I'm very aware of what garbage their site is. Right now on BuzzFeedNews I see 9 third party domains running ads or tracking pixels via JavaScript.

Ironically, one of those 9 is Facebook. So Facebook is tracking the people reading this article about how Facebook is tracking them.

But most people aren't technologically savvy enough or can't be bothered to run tools like these, and just browse the internet with basically no understanding of what is happening as they do.


I know & use three of those four addons, what's Unlock?


apologies for that, my autocorrect kicked in. I wrote uBlock and my phone did its own thing, and I missed it when I read it again prior to hitting 'reply'. It's called 'Muphry's Law' [1] and I fell right in it!

[1]: https://en.wikipedia.org/wiki/Muphry's_law


NoScript shows buzzfeed.com when you initially load it; when you (temporarily) accept that domain you get lots of others... including facebook.net. Allowing any one of those (which I didn't do) may bring up others in this very fun game of javascript domain whack-a-mole. Using that extension or one of many others that work similarly will enlighten even the most technophobic of journalists as to how this all works.

I find it hard to believe they looked into the subject and didn't run across these. But I suppose it's possible.


If, as a reporter investigating online advertising, you don't know what's going on in your own backyard, there's clearly something very wrong going on.


If that's the case, the author shouldn't be writing articles about online advertising.


why? what bearing does that have on any factual information being communicated?


Expertise and experience in the field and with analytic tools.


> I have no idea why people don't call BS on news orgs that run 'exposes' on Facebook's ad targeting, when they target, track and follow their readers in exactly the same way.

Not only is the media guilty of many of the popular tech company sins, but they originated them.

* Putting us all into filter bubbles where a few big players control and curate the information seen by the masses? The media did it first.

* Using the aforementioned powers to affect elections and democracy, sometimes adversely? The media did it first.

* Advertiser-supported business models? The media did it first.

* Obsessively optimizing the product for reads/clicks, thus leading to sensationalist clickbait headlines, an overabundance of negativity, and an extreme focus on novelty above all else? The media did it first.

The media doesn't like tech for the same reason you almost always see when any two industries feud with each other: because they're competitors and one feels threatened by the other.


Hypocrisy aside, news publications also have significant incentive to go after Facebook because Facebook has become their biggest competitor for attention. You don't need to visit NYT or WaPo for news if your social circle keeps you up to date on the issues you care about. The replacement of real investigative journalism with unverified word-of-mouth stories via acquaintances is definitely something we should be concerned about as a society, but it's also important to recognize that there are significant profit motives at play that should also be examined.


I have no idea why everyone isn't blocking all ads, trackers, and beacons, or why anyone is even using Facebook to begin with. Everyone knows what they are about. An ad-free Internet is a good Internet. Personally I have not seen a single ad online on my computers or mobile in years. You already pay to use the Internet. Keeping a website alive is the cost of doing business, not tracking people and allowing their ads to infect, slow down, or interfere with content.


Right. You'd expect the reporter to compare Facebook's disclosure page with BuzzFeed's disclosure page.

And if BuzzFeed has no such disclosure page, you'd expect the reporter to mention that, and credit Facebook more strongly for having one.



> Yet all of this journalism was paid for, in part, by The Times’s engaging in the type of collecting, using and sharing of reader data that we sometimes report on.

Nice passive voice. A writer good enough to write for the New York Times only writes a sentence that passive and convoluted for a very good reason.

"We track, use and share the data of you, the reader, and sell this to advertisers."


BuzzFeed and BuzzFeed News aren't really the same thing. They switched to a separate URL because a lot of people got confused.

Though even if they weren't separate, that wouldn't really invalidate the points made in the article.


It's definitely fair to criticize Buzzfeed etc. for these practices, but we shouldn't want to stop the reporting on Facebook just because the Buzzfeed business model has overlap. That would surely reduce the total amount of scrutiny on such practices, because of all the sites that can't throw stones from their glass houses.


You might be interested in this:

https://www.crowdtangle.com/privacy

The company's clients include many news publications like Buzzfeed. This company is part of Facebook.


And, if people are so upset about it, just run uBlock and Ghostery, like I do.


More to the point, building up lists of people's phone numbers and physical/electronic addresses has been how direct marketing has been done for literally centuries. From Wikipedia:

> In 1667, the English gardener, William Lucas, published a seed catalogue, which he mailed to his customers to inform them of his prices.

I just have trouble seeing how adding computers to this practice is the line that is too far for people.


For me: there's a world of difference between giving someone your private data and having third-party services (Scorecard, Quantcast, Criteo, Tapad, others) give it to them.

Just because I happen to window-shop at some store doesn't mean I gave them permission to ask around for my info. They're free to ask me. But that's just me.


FWIW, I wrote a little script back in Feb that you can use in the console on that Facebook advertisers page to auto click "remove" on them all for you.

Here is the github gist: https://gist.github.com/bluetidepro/bfa60c1d63925180daf3dd53...

I run it about every month, and it's crazy how many get added in that time span. There are literally thousands and thousands from brands/companies/etc. that I've never heard of. It's insane.
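
For anyone who doesn't want to click through, the general shape of a script like that is roughly this (a sketch only, assuming hypothetical markup - the button selector and label below are guesses, since Facebook's actual markup changes often and isn't shown here):

    // Sketch: find the "Remove" buttons on the advertisers page and click them
    // one by one. The selector and label are assumptions, not Facebook's real
    // markup; adjust them to whatever the page actually renders.
    const BUTTON_LABEL = 'Remove';   // hypothetical label on each advertiser row
    const DELAY_MS = 500;            // small pause between clicks

    const buttons = Array.from(document.querySelectorAll('button'))
      .filter((b) => b.textContent.trim() === BUTTON_LABEL);

    // Stagger the clicks so the page has time to process each removal.
    buttons.forEach((b, i) => setTimeout(() => b.click(), i * DELAY_MS));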


Nice job. It looks like Facebook Purity has this built in also.


The author keeps saying 'my data' when the data isn't theirs. The fact that they interacted with a party, that interaction, and who you are, is their data.

This isn't a theoretical nit pick, this is the law. And it makes sense. If you and i have a conversation, i am free to tell someone else about that conversation, because i was part of it. That is my data, as it is also theirs.

In casual use, if we agree not to tell anyone, that is different, but the default is both parties own it.

This simple-minded use of 'my data' really wrecks what could be a rather interesting (but not surprising) piece.


> This isn't a theoretical nit pick, this is the law.

Of course, the law is changing... and in fact, under a lot of laws, it really is "my data".

Much like intellectual property, these laws were established for a particular context, and we've long since moved past that context, raising reasonable questions as to whether those laws are really a good idea.

> And it makes sense. If you and i have a conversation, i am free to tell someone else about that conversation, because i was part of it. That is my data, as it is also theirs.

Are you entitled to a DNA sample from some skin cells that floated in the air while you were having said conversation? Does that conversation entitle you to know of every other place that person goes that day? Are you entitled to share a full recording of that conversation with anyone you'd like?

The lowered price of collecting, aggregating & exchanging information changes the context under which the existing laws were made, raising questions about whether we wish to alter those laws.


Exactly. Here's an example of the type of law you are talking about.

I live in an all-party consent state, so if you want to record a conversation with me, legally, you need to have my consent.

Some states are one-party consent states: you can record any conversation you're a party to.

Across state lines, federal law requires only the consent of one of the parties. But that doesn't resolve anything. Courts in different states handle cross-border complaints differently: some enforce their local all-party standard, others defer to the other jurisdiction's one-party laws.


>Are you entitled to share a full recording of that conversation with anyone you'd like?

You already have no expectation of privacy in a public space, though.

Today I overheard an HR person talk about a big tech firm's recent VP tenures and prospects on a bus. I looked up a couple of linkedins before my earbuds charged enough to put an audiobook on. Not that I'll ever contact them, I just couldn't believe someone would be so genuinely unprofessional and I was bored.

What a world we live in, where corporations can gossip just like people.


> Are you entitled to a DNA sample from some skin cells that floated in the air while you were having said conversation?

Yes. Why wouldn't I be. Those DNA are in my body. They might be infecting me in some way. Why would I not be able to inspect them however I want?

> Does that conversation entitle you to know of every other place that person goes that day?

If I'm with them the entire time then yes. If other mutual friends were with them I'm fully entitled to ask them and they are fully entitled to answer.

I went to Disneyland with my sister yesterday. I just shared with you that my sister was at Disneyland yesterday. My mom told me she and my sister went to dinner at McDonalds on the east exit of Disneyland after they got out. I just shared with you something my mother told me about my sister. I have broken no laws nor done anything considered wrong AFAIK in any country in the history of the world up to this point.

> Are you entitled to share a full recording of that conversation with anyone you'd like?

Personally I believe the answer is effectively yes and should be yes. Though I understand it might legally be no in certain places at the moment, I believe those laws will eventually be overturned. Why do I hold this position? Because my brain recorded the conversation. So, first off, if I have a good memory I can dictate the conversation. Second, if I can make a machine to pull that data out of my brain, it doesn't feel like the law can tell me I can't. It's my brain and my memory. Further, it's arguable that we will enhance brains digitally at some point. First for people with brain disabilities. At that time it will be trivial for them to digitally extract their memories and, those being their memories, again the law should have no say. In other words, tech will eventually make this question moot. Conversations will be recorded, and just like I can tell you that while at Disneyland yesterday my sister said she was going to Hawaii in June (I just shared a lo-fi recording of that conversation with you), eventually I'll be able to do that with hi-fidelity.

Let me add I'm a little scared of such a world but I personally see it as inevitable. I believe digitally augmenting brains is inevitable and I believe telling people what they can do with their personal memories and who they can share them with is impossible/untenable so that world will come eventually.

Let me add though that I'm not against laws that say such data cannot be collected in mass quantities. I have no idea how to word those laws so that it's possible for me to share all the data mentioned above with whoever I choose, and yet not allow FB to do the same, and also still allow services to help me share that data with whom I choose to share it, just like HN just facilitated me sharing info about my sister with you above.


Regarding "records of conversation": The difference is that when you tell a memory, it is not proof of it actually happening. If I claim "Hubert said he stole a car", it is far less relevant to anyone interested than an audio recording of him saying "I stole a car".

I 100% agree with this post up until the bit about recording conversations. We don't have computer-augmented brains yet, and until we do, remembering something is absolutely not the same as a true digital recording.


...and once we have a way to do it, there's no reason that we'd enjoy the same freedoms with it that we have with extant brain function.


why?

Example: a blind person has cameras installed for eyes. Are they no longer allowed anywhere there is a no-camera sign, or is that seen as restricting their rights?

Can a person with long-term memory issues augment to record so they can review later? Will you need to DRM things to an insane level to prevent them from sharing? Why should my augmented brain be under someone else's DRM?


If the replacement eyes shot high powered lasers out of them, you might find there would be some regulatory hurdles there.

> > Are you entitled to a DNA sample from some skin cells that floated in the air while you were having said conversation?

> Yes. Why wouldn't I be. Those DNA are in my body. They might be infecting me in some way. Why would I not be able to inspect them however I want?

So, by that interpretation, we don't have much of a right to privacy.

> > Does that conversation entitle you to know of every other place that person goes that day?

> If I'm with them the entire time then yes. If other mutual friends were with them I'm fully entitled to ask them and they are fully entitled to answer.

I'm not saying you're with them.

> I went to Disneyland with my sister yesterday. I just shared with you that my sister was at Disneyland yesterday. My mom told me she and my sister went to dinner at McDonalds on the east exit of Disneyland after they got out. I just shared with you something my mother told me about my sister. I have broken no laws nor done anything considered wrong AFAIK in any country in the history of the world up to this point.

But what has changed is that one can know with far more precision all of this stuff, at far lower cost, and without your conscious help. That's a very different context that maybe requires some rethinking of our laws.

> Personally I believe the answer is effectively yes and should be yes. Though I understand it might legally be no in certain places at the moment, I believe those laws will eventually be overturned. Why do I hold this position? Because my brain recorded the conversation. So, first off, if I have a good memory I can dictate the conversation. Second, if I can make a machine to pull that data out of my brain, it doesn't feel like the law can tell me I can't. It's my brain and my memory. Further, it's arguable that we will enhance brains digitally at some point. First for people with brain disabilities. At that time it will be trivial for them to digitally extract their memories and, those being their memories, again the law should have no say. In other words, tech will eventually make this question moot. Conversations will be recorded, and just like I can tell you that while at Disneyland yesterday my sister said she was going to Hawaii in June (I just shared a lo-fi recording of that conversation with you), eventually I'll be able to do that with hi-fidelity.

You are working from a common, but mistaken, model of how memory works. There is a world of difference between what is recorded in your mind and a full recording of the conversation.

> Let me add I'm a little scared of such a world but I personally see it as inevitable.

It may be inevitable, but I don't think either of us have enough context to determine that. There was a point in time where things like copyright protections seemed impossible, and now they're so etched into our legal structures that we have a hard time imagining a world without them.

> I believe digitally augmenting brains is inevitable and I believe telling people what they can do with their personal memories and who they can share them with is impossible/untenable so that world will come eventually.

Well, we tell people where they can and can't pee, whether they can eat something, where they can't sleep, what they can't take, what they can't kill, etc. The difference between anarchy and society is pretty vast and seems more inevitable than anarchy.


> So, by that interpretation, we don't have much of a right to privacy.

Not in terms of DNA, no. We leave traces of our DNA wherever we go, and if someone takes the time to read it, they'll learn what they learn.

What is the alternative? Outlaw DNA sequencing? Only allow it if you know the owner? Even if it's on my property?

> I'm not saying you're with them.

Then how would you know every other place the person went that day? Either they told you (not a privacy violation, they told you), or you were with them. Or you installed a hidden camera, in which case the data was clearly stolen and wholly unlike when someone knowingly shares contact information (which was how this analogy originated).

We're in agreement about memory recording, so no comment on those points! :)


> Not in terms of DNA, no. We leave traces of our DNA wherever we go, and if someone takes the time to read it, they'll learn what they learn.

Which also means you don't have a way of preventing all your movements from being followed, much of your health information being public knowledge, etc.

> What is the alternative? Outlaw DNA sequencing? Only allow it if you know the owner? Even if it's on my property?

The alternative is to establish rules and restrictions for the circumstances in which DNA sequencing can be done... In fact, we already have some rules about that.

You might think legal restrictions are ridiculous or impractical, but are they any more ridiculous or impractical than say copyright protections in an era when the copying & distribution of data is so frictionless and legitimately necessary?

> Then how would you know every other place the person went that day?

So many different ways... Tracking their DNA trail. Having a drone follow them as they move. Hacking their phone over bluetooth (hey, it's just emitting low power radio signals, are you going to ban that? ;-). Touching them with some radioactive tracer and then following its trail. Tracking radio transmissions from their phone. Planting a GPS tracker on them. Running a retargeting ad campaign and collecting the logs. Track their mass & its movements by analyzing how it alters the flow of neutrinos. :-)


> Or you installed a hidden camera, in which case the data was clearly stolen

I think this is more analogous to the present situation. Facebook has a public 'camera' - they record you every time you're on their site. It's up to them if they want to share what you did on their site - that data belongs to both of you.

But their camera is also on every site that you visit that pulls in FB resources - I would consider this a hidden camera.

When FB shares this data, they're sharing the composite recording of their one public and many, many hidden cameras.


Sure, and this is a legitimate problem. But it's not what was being discussed in this thread.

We got here by talking about Facebook collecting contact info that someone else uploaded to them. This, to me, is not analogous to a hidden camera.

If I give my contact info to a friend, and that friend uploads contact information to Facebook, nothing was stolen from me.


This could be true and false in some circumstances I think. For example if I give a phone number to someone and specifically ask them not to share it, and in another situation where I allow a photo to be taken with the express terms that it is not shared online or with any other person... and then someone installs an app like messenger and that app sucks in all contact info and starts uploading photos in the background - sure you could say that they gave permission to share all that data, but I think a majority of people are not understanding what all those permissions mean.

I believe there are a lot of people who have expectations about the way apps will use data, and they are being duped partially by group trust / ignorance.

I have spoken to many people who do not understand the terms of use for Instagram, and after discussing them they start to question whether they are going to allow their children to continue to use it. I think this is common with many apps and web sites - people have an expectation that video of them taken at Target is going to be used in certain ways that maintain privacy - not that Target is going to be taking images and videos of their kids and selling them off to people behind their backs.

These are just a few examples where I think you could say it's technically true, but actually people are being defrauded, or some other term, which is essentially stealing from people based on their ignorance.


I see nothing wrong with calling my contact information "my data," because I feel some ownership over how it is used and disseminated. I strongly disagree with the idea that having an interaction with a business gives them a license to tell anyone anything about our interaction, including (especially) my contact information. I agree it is "the law" right now, but I think that is wrong and should be changed.


Your info is your info. The data they collected about you is their data.

Now if only we could prevent them from sharing/selling their data which contains your info.

I realize that in my argument I am being rather pedantic...but in online text based exchange it is what separates us from the barbarians.


> The data they collected about you is their data.

That's my point. I don't think it should be. They can have access to my data to conduct our business transaction, but I disagree with the idea that my information somehow belongs to them just because they had access to it at some point.


I think this was a really good distinction, because it suggests a course of action. It sounds like you would support a requirement in consumer contracts that a person's contact information may not be used for marketing purposes or shared with others to use for marketing.

This could essentially decimate the direct marketing industry, but that might be the desired outcome. Unfortunately governments don't like to pass laws that decimate any industry, no matter how small.

I think it is also very valuable because it more clearly defines what you mean by privacy. The privacy violation happens not when they collect your contact information, or when it is put into a list, but when it is used to send you advertising which you find unwelcome.


Right. Phrasing it as "my data" shows that they don't own my information. Instead, I own it, but they have it, and it is their responsibility to take care of it. They are responsible if something happens to my data that I don't want to happen (sold to spammers; leaked by hacking). The best course of action for them is to take as little as possible and then forget the information when it is no longer useful, to minimize their risk. This is exactly what I want to happen.


Reminds me of the comedy sketch https://www.youtube.com/watch?v=CS9ptA3Ya9E:

"They took all the money? That sounds more like a bank robbery."

"No, no. If only. 'Cause we could take the hit. No, no. It was actually your identity that was stolen, primarily. It's a massive pisser for you."

"But, it's actually money that's been taken..."

"Yes"

"From you?"

"Kind of."

"I don't know what you want from me other than my commiserations."

"You see it was your identity. They said they were you!"

"And you believed them?"

"Yes, they stole your identity."

"Well, I don't know. I seem to still have my identity, whereas you seem to have lost several thousands of pounds. In light of that, I'm not sure why you think it was my identity that was stolen instead of your money."


The issue then becomes how they collected data from you.

So how do we stop that? And better yet, how do we prevent those who have collected data about us from using it in ways we do not consent to?

I too do not want my personal info being used by anyone I do not explicitly grant access to.


> The data they collected about you is their data.

9 times out of 10 they’ve collected that data without my informed consent and using my computer hardware - which I did not consent to be used for that purpose. It’s computer fraud of some kind, but IANAL.

Edit: If I flip your argument on its head, doesn't torrenting something make the torrented copy of the media "mine" since "I've collected it"? Yet we usually call that a copyright violation and illegal.


> The data they collected about you is their data.

It is your data in the sense that it is data concerning you.


According to GDPR, your data is yours. Even if captured and stored by an enterprise. Soon California too.

I'll see your analogy and raise you a begged question.

You're starting from the assumption that a collection of personally-identifiable information is free to share. That isn't necessarily true of such information.

If you and I have a conversation about a trade secret of mine, then you are not free to tell someone else about the conversation. There are forms of information and transmission that we've agreed by law to restrict.

Why shouldn't we include PII in that? (For another example, consider the privacy of medical data.)


>If you and I have a conversation about a trade secret of mine, then you are not free to tell someone else about the conversation.

this seems counter intuitive to me. can you explain?


> This isn't a theoretical nit pick, this is the law.

There are exceptions for things like health information or DVD rentals. It's time we start to think about expanding those rules to any commercial use of data about private individuals.


In Europe we think differently as can be read in article 8 of the Charter of Fundamental Rights of the European Union [0]:

"Everyone has the right to the protection of personal data concerning him or her. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified."

[0] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12...


> The author keeps saying 'my data' when the data isn't theirs. The fact that they interacted with a party, that interaction, and who you are, is their data.

"My data" might be ambiguous, because it can imply both ownership and association, but in this case obviously means "personal data concerning me".

> This simple-minded use of 'my data' really wrecks what could be a rather interesting (but not surprising) piece.

Try to read it again, this time without assuming the least favorable interpretation you can think of.


> Try to read it again, this time without assuming the least favorable interpretation you can think of.

Thank you for saying this.


The difference is you can't have a conversation with a hundred million people in a day and record everything to use in the most efficient analytical way. Additionally Facebook isn't even having a conversation - it's just a space where other people have conversations and fb records everything like a creepy friend in the shadows.

I think it's pretty pedantic to claim the data isn't ours.


Flight data is data about flights. Ocean data is data about oceans. My data is data about me.

These advertisers are running ads using a contact list they or their partner uploaded that includes info about you. This info was collected by the advertiser or their partner. Typically this information is your email address or phone number.

I like how they purposely give a couple of examples, omitting the targeting factors that are more likely to freak people out, such as targeting by lists of names and dates of birth.


On top of that, it's absurd that someone can add my phone number to their advertising platform to get my name. Unlike email addresses, phone numbers are rather finite, and the number itself is tied to a particular geographical area (for the most part).

On a related note, Facebook makes it so difficult to unlike things, unfriend people, and opt-out of information from individual advertisers. It's obviously hostile design aimed at making it as hard as possible to reduce your advertising value to them.

What Facebook is doing might not be illegal, but I think what they are doing is more unethical than many felonies. The people in charge of these decisions are getting wealthy from them, and they are never going to face consequences for their actions, which is a real shame.


> On top of that, it's absurd that someone can add my phone number to their advertising platform to get my name.

Advertisers don't get any information about you from Facebook ads unless you click on an ad and tell them it yourself.


That's why I never signed up for FB's 2FA program, and give out random fake phone numbers elsewhere, like grocery stores. I'm sure (987) 654-3210 will save you 10% on your tomatoes.

Do they get your name? Still, I hate it when all sites ask for a phone number to confirm my account and then use it to target me.

I use a different e-mail for FB than for everyday use, yet there are still a lot of companies that target me.


This works as contact matching: they provide a list of contacts and FB shows ads to those people, so the advertiser will not gain access to your personal data.
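
Roughly, the idea looks like this (a sketch only - normalization rules and the actual upload call are simplified away; such customer-list uploads are generally described as matching on hashed identifiers rather than raw emails or phone numbers):

    // Sketch: the advertiser normalizes and hashes its contact list, and only
    // the hashes are uploaded for matching. Node's built-in crypto module is
    // used here for illustration.
    const crypto = require('crypto');

    function normalizeEmail(email) {
      return email.trim().toLowerCase();
    }

    function hashIdentifier(value) {
      return crypto.createHash('sha256').update(value).digest('hex');
    }

    const contacts = ['Alice@Example.com', ' bob@example.com '];  // advertiser's list
    const hashedList = contacts.map((e) => hashIdentifier(normalizeEmail(e)));
    console.log(hashedList);  // these hashes are what would be uploaded for matching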


Name and birth date is nothing. The freaky shit is that Acxiom tracks women's menstrual cycles. Advertisers tailor their campaigns depending on what will be best received at any given time in the cycle.


I tried googling for this but turned up short. Do you have a source for this?


I sat in a meeting where this was explained to me. This was before Google existed.

> such as targeting by lists of names and dates of birth

Typically you wouldn't be able to do this.


I don't think you can target people by names or dates of birth at Facebook. IIRC targeting by name (even indirectly) is prohibited by Facebook.


really? that's interesting, because I'm literally running an experiment right now since this whole 'Facebook shows you who's targeting you' thing started.

Facebook explicitly has my name, my "second" birthday (I love all my birthdays equally, especially the ones I give to marketing sites) and a special email address I have literally created only to give to Facebook for my account. The rest of my account info is deliberately empty and I've deliberately locked down as much privacy settings as I can, don't use the site from my phone (Honestly I barely use the site at all) and have opted out of everything I can.

Advertisers are still showing up (one of them is a liquor store even though I'm a teetotaler and I have selected not to be shown alcohol ads, but what can you do?)

either way, something rather fishy is going on.


Lots of things are "prohibited" by Facebook, but slimy ad companies and middlemen do them all the time.

Unless they're Facebook "partners," which means more access, but nobody really knows how far that goes because it's all private.


It's always interesting to me the number of people who come out on HN to advocate vociferously for ownership of their private data, given that HN doesn't even let you delete your account ;)

https://jacquesmattheij.com/the-unofficial-hn-faq/#deleteacc...


I think the distinction is that -- as far as I know -- HN does not collect, generate, or publish data that I'm not deliberately intending to provide.

I would complain if my pseudonymity were compromised, or if content beyond what I intended to share was being broadcast.


> It's always interesting to me the number of people who come out on HN to advocate vociferously for ownership of their private data, given that HN doesn't even let you delete your account ;)

It's true HN doesn't let you delete your account, but I'm not sure most HN account information is really "private", since submissions and comments are all public. Maybe a user's voting history is more sensitive. But it's not like HN (to my knowledge) is aggregating data on its users from other sources.


I can't speak for anyone else, but when I signed up to HN I made the naive mistake of assuming that of course a pure tech website would be implemented correctly.

I was surprised and disappointed when I found out it didn't support deletion, but I'm kind of stuck now. Maybe one day I'll scramble my password and never come back; that's about the closest I can get to a deletion, I guess. I check all new websites more carefully these days.


True account deletion was one of the first features I implemented in my web service. I don't think any non-government site that doesn't offer it has a valid excuse. For example, with npmjs.com you have to contact customer support to delete your account; that's just bad UX.

It also requires almost no private info, so, technically, you can stay anonymous (modulo a dedicated adversary). Granted, I agree with your point. HN should try to mask deleted accounts, but the website is public, and therefore the point is moot.


I suspect HN would honour DMCA requests.

https://www.ycombinator.com/legal/


The Cambridge Analytica scandal was super interesting to me because it reflected an entirely standard and unremarkable data-collection process that I'd seen several times. And yet it filled international headlines. Congress was interested.

It's easy to do. You create some app that has "login with Facebook". That's great for users, right? One less password for them to remember. Then as soon as they log in you make a quick call to the Facebook API, get all their friends, and dump it in some database table.

Even if only a few people log in to your app, you can get a database of thousands of real people.

I don't work for that type of company anymore, but I've been at many places where that was bog standard - the very first code you write for a new product.
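A sketch of what that old "harvest friends on login" flow looked like (endpoint shown for illustration, requires the third-party requests package; the storage call at the end is hypothetical):

    # Rough sketch of the pre-2014 pattern. After the user completes Facebook
    # Login, the app holds an access token; one extra Graph API call used to
    # return the user's entire friend list. (Since Graph API v2.0, /me/friends
    # only returns friends who also use the same app.)
    import requests

    def dump_friends(access_token: str) -> list:
        resp = requests.get(
            "https://graph.facebook.com/me/friends",
            params={"access_token": access_token},
        )
        resp.raise_for_status()
        return resp.json().get("data", [])   # e.g. [{"name": "...", "id": "..."}, ...]

    # friends = dump_friends(token_from_login)
    # db.insert_many("people", friends)      # hypothetical storage call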

Does anyone know if the Facebook API has changed now?


Tangent: This is a side effect of how Facebook exposes Facebook IDs. They expose system-wide identifiers. LinkedIn's API exposes user IDs that are transformed based on whoever is interacting with the API. My network graph could overlap with your network graph, but the overlapping people would have different IDs.


This has been changed; Facebook has used app-specific IDs for many years now.
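One way such per-app scoping can be implemented (purely illustrative - not necessarily how Facebook or LinkedIn actually derive theirs) is a keyed hash of the global ID with an app-specific secret, so two apps can't join their tables on the ID:

    # Illustrative app-scoped ID derivation: each app sees an ID derived from
    # the global ID plus that app's secret, so IDs from different apps don't
    # line up even though they refer to the same person.
    import hmac, hashlib

    def app_scoped_id(global_id: str, app_secret: bytes) -> str:
        return hmac.new(app_secret, global_id.encode(), hashlib.sha256).hexdigest()[:16]

    print(app_scoped_id("1234567890", b"secret-for-app-A"))
    print(app_scoped_id("1234567890", b"secret-for-app-B"))   # different ID, same person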


Weren't you also able to pull info like the birthday, posts, likes and online status of friends of the person who used Facebook to log in to a third-party site? I think they then limited this to just the list of friends.


It became a massive "scandal" because it was part of the ongoing Facebook-scapegoating campaign that began the day after Trump won the election. There have been like 7 separate episodes in that scapegoating saga over the past 2 years, and Zuckerberg has had to go public a bunch of times with some new vow to do something to get Facebook under control so that something so heinous doesn't happen again, lol... CNN and MSNBC viewers cannot understand how someone can support Trump unless that person is a white supremacist Nazi or whatever. All I know is that I have a nice $1,500 wager with a relative (who bet me last time as well) that if Trump is IN the election, he will win again. The only thing that could stop him is if he is simply not a candidate on Election Day. I think the main thing that could stop him is his own cholesterol; celebrity physician Dr. Oz said, in a show WITH Trump in September 2016, that Trump has too much of the "bad cholesterol."

https://static01.nyt.com/images/2018/02/18/opinion/sunday/18...


Absolutely no control?

Don't use social media and lock down your browser to limit fingerprinting. Your remaining big threats are phone apps and traditional data brokers profiling your credit card usage. Cut out all unnecessary apps, block everything else with a firewall and pay cash. You will then be far more opaque to the private surveillance apparatus than most first-worlders.


I rarely use Facebook. I have an account, even some photos, but that's it. I may scroll the wall from time to time, usually reading artists' posts and stuff, as my friends are just like me - not posting anything about themselves anyway.

I'm a heavy user of privacy extensions, currently uBlock Origin + uMatrix. Most sites I visit can't store anything (like cookies) or run scripts. I have a unique e-mail address for every service I register for (thanks to a catch-all on my own domain).

The above are available to everyone.
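On the unique-address-per-service trick: one low-effort way to mint those addresses (a sketch under assumed names - the domain, secret, and tagging scheme are whatever your catch-all setup accepts) is to derive them deterministically, with a short keyed tag so you can later verify an address is really one of yours and see which service leaked it:

    # Sketch: deterministic per-service addresses for a catch-all domain.
    # The HMAC tag makes addresses hard to forge and makes it obvious which
    # service an incoming (or leaked) address was originally handed to.
    import hmac, hashlib

    SECRET = b"change-me"          # assumption: a private key you keep
    DOMAIN = "example.com"         # assumption: a domain with catch-all delivery

    def address_for(service: str) -> str:
        tag = hmac.new(SECRET, service.encode(), hashlib.sha256).hexdigest()[:6]
        return f"{service}.{tag}@{DOMAIN}"

    print(address_for("facebook"))   # something like facebook.ab12cd@example.com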

Additionally, I manually fix broken websites that could work without JS but refuse to - this, however, requires some skill.

When I really want to see a website that refuses to work with all these protections, I open it in an incognito session.

Result: on my accounts (both fake and real) these Facebook advertiser lists are empty. The only list that's not empty is the advertisers I decided I don't want to see ads from.

Of course I know this doesn't mean I'm anonymous and nobody knows anything about me. A lot of services know. The protections listed above aren't a silver bullet. They can still track me; they just won't tell me everything. Most of my data is stored with Google. I let them store my location history (they would probably do it anyway). They know all my searches. They know my entertainment interests (YouTube history, likes, dislikes).

Well... you can track me too. I don't usually use unique nicknames. You can find my contact info on the internet, on my website and in other places. You can find my home address. You can find out where I work.

But those were my choices. I still have to remember I can't take them back - "the Internet won't forget."

So yeah. You do have control. Just stop giving yourself away; protect yourself from automation. Share yourself with people offline. That's all.


One thing I don't quite understand is why Facebook's data about me is my data? In my opinion, it's not. I've never put a single thing on Facebook with the expectation that I'd own it or control it.


You know that Facebook still collects information about you, even though you may not even have an account, right?

[1] http://fortune.com/2018/04/11/mark-zuckerberg-facebook-data-...

[2] https://newsroom.fb.com/news/2018/04/data-off-facebook/

[3] https://www.wsj.com/articles/you-give-apps-sensitive-persona...


I understand what you're saying, and you're not wrong. I just find the concept of data about me being "my data" to be strange. If someone takes a picture of me, it's their picture. If I use your website, and you collect information about me, it's your information. Maybe I just don't care what you know about me. I know the REALLY good stuff about me, and I ain't sharin' that with anyone!


It's not your data, but it's data about you, and it had better be under your control. That data might be used to target you with information in order to change your political views, to increase your insurance pricing, to decline a loan, etc.

Why should it be "under my control?" This is not how I understand the world to work.

If I get caught, say, drinking and driving, then I don't get to control whether newspapers, courts, police, etc., share it with others. If I go to a party and take a shit on the floor, I don't get to dictate what the other party attendees do with that information. I don't get to demand they don't take that information into account when deciding whether to invite me to another party.

Also, insurance companies and loan companies don't use Facebook data (nor are they legally allowed to).


It's because you don't use Facebook in order to communicate WITH Facebook. Just like the telephone company or an email provider, they only have access to the communication by virtue of the fact that they have set themselves up as an intermediary.



I find it amazing that although my list of advertisers is super long, not a single company on there provides a service that's remotely relevant to me or sells something that I would ever buy.

In fact, I've never clicked on an ad on Facebook. Not because I hate ads (I'm ambivalent about them) but because I've never seen anything relevant.

For all of the hype around how persuasive Facebook advertising is supposed to be, to me their ads and personalization are no better than random noise.


I found some interesting points in the list. Bear in mind these are advertisers who got your info outside of Facebook. So while car dealerships are fairly prevalent across the board, I found most of them in my list were dealerships from the brand of car I own. Clearly somewhere along the line of purchasing my car, my email address got associated with "people who buy that kind of car", though I found it odd so many dealerships not in my local area were on the list.


Using Facebook is free. You uploaded your data voluntarily.

How do people think Facebook pays for running its service?!? Of course you are the product.


That's absolutely not true.

Sometimes some of my information is gathered without my consent or even without any warning, such as when I use a website with a FB tracker, or when third parties upload some of my personal information to Facebook.

Edit: Oh, and just because it might be buried in their TOS doesn't make it right.


Folks, those evil trackers set and collect cookies.

Act accordingly.


But Facebook also tracks people who do not have accounts, and people who pay for services. No way out.


This isn't entirely true. There is a way out. Media outlets who publish things like "Facebook Showed Me My Data Is Everywhere And I Have Absolutely No Control Over It" could also remove the Facebook scripts and widgets from their sites and write articles to the effect that it's a good decision. The way out of it is to stop with trying to shock people with the fact that they're being tracked and start taking proactive measures to prevent it in the future.

There's some logical fallacy at play when people truly believe that Facebook is the sole perpetrator of this issue.


The logical fallacy is your own; nobody in the article or the comments believes Facebook is the only party doing this.


No you misread.

> that Facebook is the sole perpetrator of this issue

Perpetrator. I'm saying we should hold the organizations publishing articles about this kind of stuff accountable as well. They use Facebook tools (Like buttons, share buttons, login integration, etc.) and encourage the use of Facebook to interact with their articles. This is how Facebook slurps up and tracks a good portion of the web. The orgs publishing these articles are also perpetrators.

> The logical fallacy is your own, nobody in the article, or the comments believes Facebook is the only party doing this.

I'm not proposing there's a narrative convincing society that Facebook is the ONLY organization tracking us, that's just silly. I'm saying there's a narrative that is similar to "tracking people is so shocking we just don't know what to do!"

Today it's Facebook, tomorrow the article will be about Google, Microsoft, or maybe Amazon. Can you believe no one's going to budge a muscle about it?


I'm curious: if I delete (I mean really delete, not disable) my FB account, do they keep it as a ghost profile of me, i.e. does Facebook still know that I exist as a person in the world? I've been thinking about just getting rid of it, not that I use it a lot, but still... One less thing to worry about and less clutter in my life.

The downside to this is that FB is my only channel of communication with some people, but then again if they truly want to contact me they can find a way to do so, although that's going to increase their inconvenience and they might decide against contacting me at all haha.


They probably don't delete your profile, but keep it with a "deleted" flag. I'd still recommend getting rid of it. It reduces the noise in your life, and while you will lose touch with a few people, most of the ones who matter will email you.


> They probably don't delete your profile, but keep it with a "deleted" flag.

I don't like Facebook, but I'm not so sure about that. I read one article a month or so back where someone paid attention to the targeted ads they were shown after they'd deleted their facebook account. After a few weeks, the targeting got much worse.


Mildly ironic since this site uses tracking ads that were blinking in and out of existence all over the page as I was scrolling down the article. I had to stop reading because it was so annoying.


Does anyone know if there is equivalent transparency for my Google account?


There's a section (Google Takeout) where you can download all the data Google has on you.



On that Advertisers settings page, it is really lame you cannot click through the list of advertisers in one fell swoop, disabling ads from all of them at once. There is no "select all."

No, you have to click the X in the corner of each one, then click View More after every 12 you have disabled. I got through about 140 before giving up.

I think this was intentional.


I wonder how it would be technically feasible to, instead of copying data to other parties, just grant them access for a limited time and be able to revoke that access. This would be perfect for a lot of data sharing, but I guess there is no way to prevent copying of digital data once somebody can read it.
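One common approximation is short-lived, signed access grants that the data holder can also revoke server-side, so the data itself is never handed over wholesale - though, as noted, nothing stops the recipient from copying whatever it reads while the grant is valid. A minimal sketch (the token format and all names are made up for illustration):

    # Sketch of time-limited, revocable access grants. The data stays on the
    # holder's server; third parties get a signed token that expires and can
    # be revoked before expiry.
    import hmac, hashlib, time

    SECRET = b"server-side-signing-key"
    REVOKED = set()   # grants revoked before their expiry

    def issue_grant(party: str, ttl_seconds: int) -> str:
        expires = int(time.time()) + ttl_seconds
        payload = f"{party}:{expires}"
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}:{sig}"

    def check_grant(token: str) -> bool:
        payload, _, sig = token.rpartition(":")
        good_sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        expires = int(payload.split(":")[1])
        return (hmac.compare_digest(sig, good_sig)
                and time.time() < expires
                and token not in REVOKED)

    t = issue_grant("analytics-partner", ttl_seconds=3600)
    print(check_grant(t))     # True while the grant is valid
    REVOKED.add(t)
    print(check_grant(t))     # False once revoked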


If your data (or at least data about you) is everywhere, there's only one thing left to do. You'll have to engage in a misinformation campaign about yourself. Send advertisers so much conflicting information about yourself that they can't distinguish the signal from the noise. That way they won't be able to target you.

This may sound ludicrous, but there actually is a browser plugin out there (I don't have a link handy, unfortunately) that confounds meaningful tracking efforts by clicking on each and every ad on every page you view. The pages the ads link to aren't actually displayed to you.

Like I said, it's a crazy tactic but it makes perfect sense. Overwhelm them with data.

Google strongly opposes it, by the way. So it must be on the right track.
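A crude do-it-yourself variant of the same idea - not the plugin mentioned above, just decoy traffic on topics you don't actually care about (URLs and timings are illustrative; requires the requests package and runs until interrupted):

    # Crude decoy-traffic generator: periodically fetch pages you have no
    # interest in, to dilute whatever profile is being built from your traffic.
    import random, time, requests

    DECOY_URLS = [
        "https://en.wikipedia.org/wiki/Special:Random",
        "https://example.com/",
    ]

    while True:
        url = random.choice(DECOY_URLS)
        try:
            requests.get(url, timeout=10)
        except requests.RequestException:
            pass                                  # ignore failures; this is just noise
        time.sleep(random.uniform(60, 600))       # irregular intervals look less robotic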


I think it can be done quite a lot more simply: register on each service with a different first and last name and birthdate, linked to a throwaway email address with its own different name and birthdate. Then register on another service with another, and so on. I'm a white American guy, yet because I've watched some Spanish pop videos and used Google Translate for Spanish translations (and done other things, not as part of a concerted mis/disinformation campaign), my Android phone believes I'm a Latina woman, and I even get a mixture of Spanish and English ads and pre-roll commercials on YouTube videos, and some of the ads are for women's clothing and makeup.

Because it's not your data. Just because something describes you, doesn't make it yours.


"Yours" and "not yours" is a really primitive way to look at the problem. Every camera that you have passed in front of has your image, and we more or less accept that if it's in a public place that's OK. But that doesn't mean someone can take that image and put it in a national television campaign for their product. There are shades of ownership.


I agree, ownership is a poor way to look at it. In fact, there is a rich body of property and tort law that addresses this whole subject in vast detail and has identified a large set of principles and criteria that can be and are applied.

What I never understood, when I still used FB, is that they knew everything about me and yet I was not once shown ads I was even mildly interested in, while on Google the ads are always spot on... Not sure how they can be so bad as to get zero clicks from me while having all that info. It's like this person: getting car and mortgage ads while FB knows I couldn't care less about either. Get-rich-quick packages, which again I don't care about, as far as they can see from my comments etc. Flights to Thailand for next week while they know I am currently in Thailand. It's the worst targeting I have seen: not unlike the ads of the late '90s.

If that headline accurately describes the situation, in what sense of the words is it "my data" instead of "data about me?"


Data retention and sharing should be illegal because it’s like a dam bursting when any actor, anywhere, lets your data out.

You could literally do everything right for decades as far as safeguarding your data, and then one slip-up could leak it ALL and let services connect you to ALL the other information about you.


Then leave Facebook?

Just like your credit report and score, and that's probably more meaningful.


At what point does the end user feel accountability for giving a free service all of their information? I get that they employ various unsavory tactics (especially on mobile), but remember that this _is_ the Internet after all.


reminds me of this fascinating video about how buzzfeed is playing the system perfectly:

Capitalism, Cultural Disintegration, and Buzzfeed https://www.youtube.com/watch?v=9srhgHzUFd4


The garbage site this content is on is made by idiots. First I get one pop-up, and as I'm about to cross it out another pop-up appears on top of it with the accept button at the same position so that I accidentally press accept.

Stop with the fucking pop-ups if you don't know how to do it!


This is why many of us use ad blockers.


You work for an online magazine and you weren't aware of this? If you're not the customer you are the product.



