Facebook's seized files published by MPs (bbc.co.uk)
738 points by AndrewDucker 9 days ago | 477 comments





Here is a direct link to the files themselves:

https://www.parliament.uk/documents/commons-committees/cultu...


Really interesting stuff in there.

> Facebook email 24 January 2013

> Justin Osofsky – ‘Twitter launched Vine today which lets you shoot multiple short video segments to make one single, 6-second video. As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision.’

> MZ – ‘Yup, go for it.’


What's really striking is how user-hostile this conversation is. Forget about whether the user wants to share their data or not - it's all about what Facebook wants to do. In this case, that means snuffing out competition by denying access; in the case of Cambridge Analytica, it meant sharing data for the purposes of shady data "research".

Yeah, that seemed unexpectedly flippant/dismissive. But a couple things:

1. If you look at the docs, that's from Exhibit 44, which indicates that it's actually an excerpt from a Messenger discussion, not email.
2. Twitter had previously blocked both Instagram and Tumblr in the same way.
3. Facebook had previously blocked Twitter in the same way.
4. In some of the other docs here, you can see that there was much more discussion about what their policy should be around reciprocity and apps competing with Facebook's features.
5. The first line of that excerpt indicates that there was likely discussion/planning about this before that conversation.


There's nothing that Facebook did here that any other company, in tech or otherwise, wouldn't have done.

And for the record, Facebook did not "share data with Cambridge Analytica for shady research purposes". A rogue third-party developer created one of those shitty quiz apps for Facebook and then proceeded to get users to sign up for it; several million did, which allowed said developer to harvest data thanks to the very permissive APIs that Facebook provided at the time. He then proceeded to sell this data to Cambridge Analytica. Facebook has a responsibility in what happened there, but "Facebook sold data to Cambridge Analytica" is a wildly misconstrued story.


> There's nothing that Facebook did here that any other company, in tech or otherwise, wouldn't have done.

This isn't true. Lots of companies wouldn't steal users' call logs - e.g. Mozilla, Signal, and plenty of boring, normal ones who make TODO list apps or whatever.

It also isn't relevant. See how that argument flies in criminal court. "Anybody else would have stolen that car."

What we see here (again) is that FB does nasty things and it's in the public interest to stop them - along with "any other company" who does the same things.


Facebook's business is car stealing. They tell you that upfront: we want your car, and if you park it in our garage we're going to take it.

All this anger over Facebook is ridiculous.[1] Now, if you want to talk about Android and Google's decision to make it more difficult to not only control but to know what data apps (especially theirs!) will take, that's a different matter....

[1] Especially from geeks, and particularly geeks from the 1990s and earlier, when we were told that unless we promoted non-centralized publication models we'd see the very constellation of centralized, user-antagonistic, profiteering services we now have. Whenever someone says, "but why would you want to host your own e-mail/web/chat server?", my head wants to explode. It's always "why would you" (or "why would you, it'll never be as good as GMail/Facebook/Twitter/etc"); never "maybe I should promote and help work on projects that make it easier".


"A rogue third party developer"

We are all rogue third party developers, and clearly the priority, much like at most businesses, is to make a profit; the customers and ethics come second.

I love the fact that you get bent out of shape that Facebook didn't sell it though, it's a theme I've seen with Facebook employees: "But we didn't sell it!"

I'm not sure if they are lamenting the fact they didn't sell it, but I sure as hell can tell what they give a damn about. If you want to hide under "anybody would have done it", let's take a trip down history's gravestones and figure out whether or not we should bother trying to do the right thing because it is the right thing to do.


> There's nothing that Facebook did here that any other company, in tech or otherwise, wouldn't have done.

Dubious, but still a good point. Meaning legislation to force companies to behave slightly less unethically is all the more direly needed.


If exfiltration of user information and data was not the explicit purpose of FB's API policies, they soundly rejected the principle of least privilege, which dates back 45 years and is no doubt incorporated into FB's own systems.

thanks to the very permissive APIs that Facebook provided

Why did they do this?

https://en.wikipedia.org/wiki/Principle_of_least_privilege


Facebook improved this years ago and you can see the discussion surrounding this change in the released emails. These days a Facebook app can't ask for your entire friends list, instead, it only gets to see your friends that have also authorized that app. Also, user IDs now have a per-app namespace so they can't be (easily) correlated between different apps.

The discussion revealed in this release is pretty fascinating. For example, you can see that at some point Zuck's friends authorized 31 apps and 76% of those apps had "read_stream" access giving access to their entire newsfeed.

Through one lens this is Facebook locking down their API in an anti-competitive way, which is somewhat true, but mostly this feels like an API change making privacy improvements for users. (The Cambridge Analytica data came from an older app that was running before these changes were made...)
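The per-app ID namespacing mentioned above can be sketched in a few lines. This is purely illustrative - the HMAC construction, secret, and function names are assumptions, not Facebook's actual (unpublished) derivation - but it shows how a platform can give each app a stable, unlinkable view of the same user:

```python
import hashlib
import hmac


def app_scoped_id(global_user_id: str, app_id: str, secret: bytes) -> str:
    """Derive a stable per-app pseudonymous ID from a global user ID.

    Illustrative only: the real platform's scheme is not public.
    """
    msg = f"{app_id}:{global_user_id}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:16]


secret = b"platform-secret"

# The same user seen by two different apps gets two different IDs,
# so the apps can't trivially join their databases on the ID alone...
id_app_a = app_scoped_id("user123", "app_a", secret)
id_app_b = app_scoped_id("user123", "app_b", secret)
assert id_app_a != id_app_b

# ...but each app sees a stable ID for that user across sessions.
assert id_app_a == app_scoped_id("user123", "app_a", secret)
```

The design choice is that unlinkability lives entirely in the keyed derivation: without the platform's secret, no app can recompute another app's view of the same user.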


Facebook improved this years ago and you can see the discussion surrounding this change in the released emails...

This is the same elision they use. My question was, in the face of almost two generations of awareness of the principle of least privilege (almost typed "lead" again!), why did they design the API so that it gave away so much information and data in the first place?

Through one lens this is Facebook locking down their API in an anti-competitive way, which is somewhat true, but mostly this feels like an API change making privacy improvements for users. (The Cambridge Analytica data came from an older app that was running before these changes were made...)

https://newsroom.fb.com/news/2018/12/response-to-six4three-d...

Read the "Whitelisting" section. The only change they mention is turning off the ability to request permission to access the now-problematic data and information (let's say "D&I"). Of course, we also know that this is selectively applied. That's not "somewhat" anticompetitive, it's not necessarily different from the CA problem, and at any rate it is only a marginal privacy improvement for users because there's (my estimate) no way in hell they're going to tell us who still has access to the APIs.


Can't they trivially use your first name and/or email to correlate across apps? I'm pretty sure those are all part of the lowest permission class.

I don't think the Facebook API gives you access to your friends' emails... but agreed, there are still ways to correlate this (hashes of profile photos, for example?).
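The correlation worry above can be made concrete: even with per-app user IDs, any stable attribute that both datasets share (an email, or a hash of the same profile photo bytes) works as a join key. A minimal sketch, with invented field names and toy data:

```python
import hashlib


def fingerprint(record: dict) -> str:
    """Build a join key from a stable attribute - here, profile photo bytes.

    Illustrative only: the field names are assumptions, not a real API schema.
    """
    return hashlib.sha256(record["photo"]).hexdigest()


# Two apps hold the same user under different app-scoped IDs,
# but both stored the same profile photo.
app_a = [{"scoped_id": "aa1", "photo": b"\x89PNG...cat"}]
app_b = [{"scoped_id": "bb9", "photo": b"\x89PNG...cat"}]

# Index one dataset by fingerprint, then probe it with the other.
index = {fingerprint(r): r["scoped_id"] for r in app_a}
matches = [(index[fingerprint(r)], r["scoped_id"])
           for r in app_b if fingerprint(r) in index]

# The two app-scoped IDs are now linked via the shared photo hash.
assert matches == [("aa1", "bb9")]
```

This is why ID namespacing alone is only a partial mitigation: it removes the obvious join key but not every join key.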

The "permissive API" is Facebook changing what words mean over time. People signed up, shared things, and then Facebook changed default behaviors without communicating the change WELL.

http://mattmckeon.com/facebook-privacy/

In April 2010 basically everything a new user posted to Facebook was public by default. They didn't "care deeply about people's privacy."


Facebook provided the means for Cambridge Analytica to occur. Facebook also shared social graph information with Obama's campaign.

Facebook is far from blameless, and should be held to account for its scummy behaviour, and for enabling scummy behaviour.


That's not true, man. Some companies were "allowed" to use/get the data even after it was shut down, and the API was created in the first place to entice the masses.

It's worth remembering that Facebook did share data, in violation of their own terms of use, with the Clinton campaign - in fact, that policy came about because Obama "abused" Facebook to collect contact information for friends of people who liked or followed his campaign. Despite this policy change, Facebook allowed Clinton to do the same. They claimed it was by mistake, but even after the mistake was revealed, they didn't change it or cut off Clinton. Clinton's campaign manager speculated it was because "they agreed with us", but also thought that the Trump campaign had similar access (so far, no evidence has emerged to support that).

Facebook is not a good actor, any way you look at it. They are selling data to first or second parties, who are using it to damage our country.


Maybe the problem is profit-driven companies...

Yeah - if everyone was assigned a job by the government we wouldn’t have this problem.

Yeah, so much for openness and connecting people.

Funny that the committee ended up being the open ones.


If you own a grocery store and there's a guy on the other side of town who is cheaper, the people who come to your store because it's more convenient would love for you to be forced to just give away half your space to your competitor. Doesn't mean it's "user hostile" to refuse to do so.

I agree; of course that isn’t.

But being forced to give away half your physical retail space is hardly the same thing as just letting them keep using an API that you provide explicitly for such use.

Also, more broadly: one would have quite a hard time making the case that Facebook isn’t nakedly, gleefully, and rapaciously user-hostile.


I don't agree, but it's probably not worth making the case. Facebook has billions of users. I assume you think that they want to leave, but they "can't", or that they just don't know how hostile Facebook is towards them.

I think a lot of people who hate Facebook just have a hard time believing that most people just don't care about the same things you do, or to the same degree. They're still on Facebook and Instagram and Whatsapp because they see the world differently from you.


> But being forced to give away half your physical retail space is hardly the same thing as just letting them keep using an API that you provide explicitly for such use.

In this case, Facebook was deprecating the API and declined to provide special whitelist access to a competitor.


So they just heard about Vine, and decided to deprecate the API the same day? That doesn't sound right to me. That conversation seems to indicate they just wanted to block them ASAP (same day), nothing to do with deprecation?

> So they just heard about Vine, and decided to deprecate the API the same day?

No? Where are you getting this read from? The documents clearly show them discussing it from a year prior to shutting down Vine's API access, and planning on announcing it publicly ~6 months prior.

I can't find anything from a quick google search on when the API deprecation actually took effect, but assuming the timeline from Exhibit 43 is accurate, Twitter actually had whitelisted access for over 3 months before being shut down.


Nothing says evil more than preparing reactive PR to bury your competitors. And the nonchalance of his response sends chills down my spine. These people will suffocate innovation just to win.

CEOs and executives are the closest equivalent of royalty in the United States. Their media coverage is often hagiographic as a result. They are humanized and puffed up in the press to an extent that foreign press would never think to do about business leaders in their own countries.

Inch upon inch of columns are dedicated to their morning habits, favourite TV shows and fashion choices, and other fluff content to make them "relatable" to the average joe/jane. This is especially magnified when it comes to SV execs because they wear hoodies and tshirts instead of bespoke suits.

And that's what leads to reactions like "I can't believe he'd be so callous to users", as if the person in question is a hard working bootstrapper and not a billionaire looking to maximize market share and profit.


Zuck has a bespoke hoodie, and ordered a pallet of them for the company so his employees could dress like him.

Gross

As the news coverage of the time pointed out, Facebook did this to Twitter a few months after Twitter themselves did the same thing to Instagram (which was already owned by Facebook at that point) and Tumblr: https://www.theverge.com/2013/1/24/3913082/facebook-has-appa... All of the big social networks were and still are like this.

> And the nonchalant way his response sends chills down my spine.

CEOs of massive companies don't have time to write long and explanatory emails. They put people in charge that they trust, so they can just say one word or sentence and know that it'll get handled.


Sometimes I write some half ass objection before green lighting it just in case the convo gets leaked

idgaf though


> These people will suffocate innovation just to win.

I don't like Zuck, but come on, you just described every CEO in America: when they have a choice, they will do this.


Yes. Which is bad!

Very definitely this is bad. But it's not a surprise.

It shouldn't have to come as a surprise for us to be against it; that's how we get complacent and accepting of this behavior.

Nobody said it is.

Isn’t that like saying one must be a foolish consumer because they live in a western nation?

I don't agree that Elon Musk is like this (quite the reverse) and I'm sure there are many other CEOs who aren't too.

Let's not normalise sociopathology, even given its prevalence amongst business executives.


I'm no Musk fan but he opened up all the Tesla patents so other manufacturers can use their intellectual property.

Maybe he had ulterior motives... Don't know. But opening up patents is certainly the opposite of squashing innovation.


Yes, that was my point. That Musk actively encourages other companies to share in his companies' innovations.

I am definitely no fan of Zuck, but on the subject of Elon Musk, this is the same guy who tried to use his high media profile to call an innocent man a pedophile just because he wouldn't follow Musk's crazy plan.[1]

So that's another one off the "CEO billionaire but not a sociopath" list.

[1] https://www.theguardian.com/technology/2018/jul/15/elon-musk...


I'd suggest you find a better source than The Guardian for news about Elon Musk. They have run a relentless smear campaign against him for years now. Just one more reason to loathe that publication, in my book.

Can I read about this somewhere? Did the Observer write about it?

Yeah it was really widely reported, not just in the Guardian if you don't like that paper.

Here's 4 other sources. There are many more on DuckDuckGo News.

https://eu.usatoday.com/story/news/world/2018/07/18/elon-mus...

https://www.bbc.co.uk/news/world-asia-44870303

https://www.nbcnews.com/news/world/elon-musk-apologizes-blam...

https://www.cnbc.com/2018/07/18/elon-musk-apologizes-to-brit...

From CNBC:

> The apology comes after the spelunker, Vern Unsworth, who was involved in the early days of efforts to save the now-rescued boys’ soccer team, threatened legal action against the billionaire executive over the comment.

> Musk said on Twitter late Tuesday that he had made the claim out of “anger” because Unsworth had criticized his idea to rescue the boys with a “mini-submarine” made out of a SpaceX rocket part.


Sorry, I meant the smear campaign. It doesn't sound like the Guardian.

> Nothing says evil more than preparing reactive PR to bury your competitors.

Really?


Hyperbole. Facebook is a free website... A website.

Burying competitors is a good thing. That’s the whole point. That’s literally the objective of every business in the entire economy.


The holocaust? That's nothing! Check out these guys, they're preparing PR to say bad things about their new competitors.

I have a hard time feeling bad for Twitter getting API access pulled out from under them. How many times have they done that to products/services that depended on Twitter APIs?

It's official, Mark Zuckerberg is the new Bill Gates. As in "we don't care about ethics"

This was known for a very long time. The "Dumb fucks" comments were brought to the public attention many years ago. The problem is that Silicon Valley gave Facebook a pass on all of the ethical transgressions for years (most likely since they minted many millionaires and billionaires in the valley)

I perceive the tech community to be okay with this as long as Facebook keeps giving us excellent open source tools.

Gave them a pass, yes, and also bought lots of FB stock, and shared in the spoils.

What is NUX? I was googling and closest I could find was New User Experience, is that right?

New User eXperience.

Thanks

Is that for real? I remember Mr. Zuckerberg said their mission is to connect the world together.

Can you help me understand how you make the world a smaller place and connect people by "shut[ting] down their friends API access"?


Connect the world together on Facebook.

Did you read the entirety of this report? Can you share any more interesting quotes/bits?

If you did read, would you suggest other HN users read it?


holy smokes...

>Michael LeBeau – ‘He guys, as you know all the growth team is planning on shipping a permissions update on Android at the end of this month. They are going to include the ‘read call log’ permission, which will trigger the Android permissions dialog on update, requiring users to accept the update. They will then provide an in app opt in NUX for a feature that lets you continuously upload your SMS and call log history to Facebook to be used for improving things like PYMK, coefficient calculation, feed ranking etc. This is a pretty high risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.’

>Yul Kwon - ‘The Growth team is now exploring a path where we only request Read Call Log permission, and hold off on requesting any other permissions for now. ‘Based on their initial testing, it seems this would allow us to upgrade users without subjecting them to an Android permissions dialog at all.

This is huge, doesn't this make google guilty as well?

>‘It would still be a breaking change, so users would have to click to upgrade, but no permissions dialog screen.

EDIT: formatting


Now remember that Facebook has made agreements with phone manufacturers to have FB installed by default and made un-uninstallable, with all the default permissions to share the user's data whether they ever log in and use the app or not!

Sidenote: I've noticed via umatrix that Netflix on pc, during a show, is attempting to load fb js... Netflix wtf!


fb.js is Facebook’s standard JS base, with things like polyfills/ponyfills to ensure certain features in a browser environment. It’s imported by React, Relay, etc.

So this might be what you’re seeing, but normally it’s included in a precompiled JS application bundle.


I believe you're thinking of FBJS (https://github.com/facebook/fbjs), which is a library as you describe, whereas the comment above is referring to loading the Facebook SDK from Facebook's servers. Among other things, Netflix offers Facebook login, which would need the SDK loaded.

Don't they use React for their UI?

Surely they would serve it from their own CDN though?

Taking advantage of everyone having it already cached on their machine maybe? Or it could just be standard ad retargeting - not unreasonable that Netflix would want to stream behavior data to facebook ads for targeting / lookalike purposes

As of a few years ago, Android asks users to agree to categories of permissions rather than individual permissions, and adding permissions from the same category doesn't count as a new permissions grant. Based on that description it doesn't sound like Facebook was abusing this since they still required users to opt in. (Though I seem to recall from older discussions of this that in actual fact, the opt-in process they implemented was sleazy and high-pressure.)

If their application only needed to run on newer Android, I think they could rely on runtime permissions and not request this permission at all unless the user actually turns the feature on - but even now about a third of Android devices in use are on versions too old to support this.
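The category behavior described above can be modeled in a few lines. This is a simplified sketch of the install-time grant logic as described in this thread - not Android's actual implementation - and the group assignments below are illustrative (the real mapping has changed across Android versions):

```python
# Simplified model of permission groups under the install-time model:
# once any permission in a group is granted, adding another permission
# from the same group on an app update does not count as a new grant.
PERMISSION_GROUPS = {
    "READ_CALL_LOG": "PHONE",
    "READ_PHONE_STATE": "PHONE",
    "READ_SMS": "SMS",
}


def needs_new_user_grant(granted: set, requested: str) -> bool:
    """Return True if the requested permission would prompt the user."""
    requested_group = PERMISSION_GROUPS[requested]
    granted_groups = {PERMISSION_GROUPS[p] for p in granted}
    return requested_group not in granted_groups


# In this model, an app that already holds READ_PHONE_STATE can add
# READ_CALL_LOG without triggering a new permissions dialog...
assert not needs_new_user_grant({"READ_PHONE_STATE"}, "READ_CALL_LOG")

# ...while a permission from a previously untouched group would prompt.
assert needs_new_user_grant({"READ_PHONE_STATE"}, "READ_SMS")
```

If the model is roughly right, it explains the Kwon email: picking a permission whose group the app already occupies means "no permissions dialog at all."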


> This is huge, doesn't this make google guilty as well?

I'm not sure I follow. An app can request permissions, and the user can allow or deny them. I don't understand how this puts guilt on Google. Can you elaborate?


This seems like a hole in their design: additional access is being granted without the user really knowing what is going on, and they are deliberately keeping the user out of the loop.

At least, that is how I am interpreting it; it seems that their software is not functioning in the 'spirit' of what it is supposed to be doing.


In essence, the Android permissions system has (had?) a vulnerability that Facebook exploited, and Google is responsible to a small extent as the maintainer of the vulnerable software.

Google is very culpable because the various problems with Android's permission system were raised hundreds of times by security experts, both internal and external, and they didn't consider it a high priority to fix.

Even when they added a sane permission model in Android $VERSION, developers were allowed to bypass it for years by just building apps targeting Android $VERSION - 1 instead.

Google's web security may be the best in the world, but Android security is a disgrace and they should be called on it. (Fuchsia may put them on top of the world if they ever switch Android to that, but we'll have to see whether that happens.)
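The targeting loophole described above can also be sketched. Android 6.0 (API level 23) introduced runtime "ask on use" prompts, but an app declaring an older target SDK kept the legacy install-time model even on new devices. A simplified, self-contained model of that rule (not Android source):

```python
# Android 6.0 ("Marshmallow", API level 23) introduced runtime prompts.
RUNTIME_PERMISSIONS_SDK = 23


def uses_runtime_prompts(target_sdk: int, device_sdk: int) -> bool:
    """Simplified model: runtime prompts apply only when both the device
    supports them AND the app targets the newer SDK; otherwise all
    declared permissions are granted at install time."""
    return (device_sdk >= RUNTIME_PERMISSIONS_SDK
            and target_sdk >= RUNTIME_PERMISSIONS_SDK)


# A developer could sidestep runtime prompts on a modern device
# simply by targeting SDK 22:
assert not uses_runtime_prompts(target_sdk=22, device_sdk=25)

# Only apps that opted into the new target got the stricter model:
assert uses_runtime_prompts(target_sdk=23, device_sdk=25)
```

(Users on Android 6.0+ could still revoke such an app's permissions manually in settings, but the default flow never prompted them.)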


> Facebook had been aware that an update to its Android app that let it collect records of users' calls and texts would be controversial. "To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features," Mr Collins wrote

So did this change? I installed Messenger recently and this is pretty much the first thing it requests (no thanks). It also asks to let people search for you by number (no thanks) and to sync with your contacts (no thanks, smells like LinkedIn).

I have zero permissions enabled for Messenger, so I guess it would then ask before uploading my call logs?


> This is a pretty high risk thing to do from a PR perspective

What's "PR" here?


Public Relations - how they are perceived by the public. This was seen as a risky move because it had the potential (which Facebook realised and decided to press ahead with anyway) to anger a lot of people.

https://en.wikipedia.org/wiki/Public_relations


Public relations -- bad press. Here's some more context from the BBC article:

> Facebook had been aware that an update to its Android app that let it collect records of users' calls and texts would be controversial. "To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features,"


Public Relations

Nice thanks!

I worked at Facebook for 5 years on Workplace and Internal Tools. I am typically very critical of the company nowadays, but even so find the discussion here difficult to digest.

People are complex. They are more complex than an action, or even a group of actions. To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them for the sake of making the world simpler. It is a poor model, and in a Dale-Carnegie-way it leads to poor outcomes, as you close the dialog with that person that allows you to change opinions and outcomes. It is the same with groups or companies: some parts of groups do good from some vantage point, some do bad.

I found myself inspired by a lot of what Facebook did. I loved working inside Infrastructure there; I was amazed by what people were innovating on every day. Projects like charitable fundraising have raised a lot for charity. I've seen the Are You Safe feature reduce so much stress during disasters. I keep meaningful dialogues going with friends I don't get to see often on FB and Instagram. It makes me really happy to see my friends thriving.

One of Napoleon's great gifts was in compartmentalizing pieces of his life. His tumultuous and frankly soul-crushing personal life (which affected him deeply) with Josephine never got in the way of his military victories. I wonder if that's a good model, up-to-a-point for people and groups. With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.

... in groups: if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.


> People are complex. They are more complex than an action, or even a group of actions. To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them for the sake of making the world simpler.

Correct. But we're not talking about people here, we're talking about a corporate entity as a whole. It's a bit more complex than a person. It has an incentive model and a set of norms (culture) which enables people to do good/bad based on the situation to maximize their personal gains (whichever those are - personal, material, spiritual, you name it).

If this consistently enables people to do what is perceived externally as unethical, we have a problem.

Nice spin on it, but Facebook is still a bad actor (despite what you say). I held this opinion since the beginning, before all the leaks and all the scandals but nobody believed me. Very smart people were incentivized to join using the entourage (come work with other smart people) and money, then slightly brain-washed in a cult-like manner (us vs them, wartimes, etc..). This enabled them to ignore blatant unethical behavior. I've seen all of these first-hand and it's bad. Bad bad bad.


> Nice spin on it, but Facebook is still a bad actor (despite what you say).

This needs to be a conclusion, not a premise. I do not have a firm belief about whether or not this is true, but I keep seeing people list bad things about FB and draw a straight line to “thus it is evil”. Ideally, we would list the good/bad it does and assign weights to these points to determine if it is net harmful.

I want to be convinced, HN, but when I read comments like the ones in this thread (FB has no positive value whatsoever for its users, FB sells users’ data), it makes it hard for me to appraise.


At its foundation, Faceboot is a man-in-the-middle attack dressed up to attract users. The standard "web 2.0 defense" was that these companies would remain benevolent in the interests of their users to maintain their own profits. The onus was really on them to demonstrate that trusting third parties with our communications is a reasonable thing to do. The more time passes, the more blatant the evidence gets that this is not true.

> The onus was really on them to demonstrate that trusting third parties with our communications is a reasonable thing to do.

In general, I agree, but the context here is dozens of HN commenters claiming that FB is a bad actor/net negative for the world. These commenters cite lists of gripes about the company, many legitimate, some illegitimate, but get hand-wavy when someone asks about the "net" in "net negative".

I'm not rejecting the conclusion, here, by the way! If the company is ultimately bad for society, this line of questioning should embolden the consensus HN opinion.


The metric of "net (negative)" is nonsensical on a multidimensional question, due to varying utility functions.

Faceboot directly optimizes to increase time spent on their site, wasting human potential.

Faceboot also provides an effortless way to check in with loved ones after a disaster, easing human suffering.

Two people will look at both of these facts, value each one differently, and come up with a different "net".

Furthermore, it is not appropriate to give Faceboot all the credit for either facet! People could check in with text messages, and people would be in dopamine loops even with software purely under their control.

So the only thing we can really do is discuss each facet independently, in the context of first principles / morals.

I personally think much of what is wrong with Faceboot is due to the conflict of interest from a third party mediating social relationships, and the inhuman scale of centralization. But the more that comes out about inherent social media narcissism, I also see that there is no silver bullet.


I agree with much of what you wrote, but disagree that the question is nonsense. From a specific utilitarian perspective, "wasted human potential" only matters in so much as it impacts human suffering. If we want to maximize pleasure and minimize suffering, the units are still the same in both opposing points you mentioned - it is just really, really hard to measure.

In practical terms, coming up with qualitative points and weighting them in a good-faith but ultimately arbitrary manner is good enough, and something that everyone does constantly. For example, I suspect the average HNer weights "wasted human potential from FB" more heavily than "benefit of disaster reporting from FB" (which I also agree with - however, "benefit of disaster reporting" almost certainly outweighs "detriment from nebulous analytics firm scandal").
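That weighting exercise can be made concrete with a toy example - all facet scores and weights here are invented - showing the crux of the disagreement: the same facts yield opposite "nets" under different weightings.

```python
# Shared facts: each facet scored once, on an agreed scale.
facets = {"disaster_check_in": +1.0, "time_sink": -1.0}


def net(weights: dict) -> float:
    """Weighted sum of facet scores under one person's priorities."""
    return sum(weights[k] * v for k, v in facets.items())


# Two (invented) sets of priorities over the same facets.
privacy_minded = {"disaster_check_in": 0.2, "time_sink": 0.8}
casual_user = {"disaster_check_in": 0.7, "time_sink": 0.3}

# Same facts, opposite verdicts: -0.6 vs +0.4.
assert net(privacy_minded) < 0 < net(casual_user)
```

This is why "is FB net negative?" can't be settled by listing facets alone; the weights do all the work, and they differ per person.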


> If we want to maximize pleasure and minimize suffering, the units are still the same in both opposing points you mentioned - it is just really, really hard to measure.

It's not hard - it's impossible. Even if we fully agree on a specific metric, a situation still has to be weighed against possible alternatives and integrated on timescales longer than our lives. A very basic example of this is a company optimizing for "profits" when it's actually optimizing for short-term profits at the expense of going out of business two years later. The future is unknowable in the same exact sense that every NP-hard problem is. Heuristics are the only way to tackle this.

> coming up with qualitative points and weighting them in a good-faith-but-ultimately-arbitrary-manner is good enough, and something that everyone does constantly

Of course, which is why I'm referring to "Faceboot" even while recognizing that people get utility out of it. I just think that focusing on arbitrary pairs of specific facets is already headed down the path of madness, which is why I stated my initial critique in terms of basic principles.


> Heuristics are the only way to tackle this.

Yes. This is why every single comment of mine in this thread recommends a specific heuristic with which to tackle this exact problem.


The heuristic is the sum of what individuals are willing to put up with or not -- not something you can calculate and then just prescribe for everybody.

> net negative

How do you weight the individual criticisms, or individual benefits to achieve an objective positive/negative score? How do you weigh "installing spyware to see what companies we might buy" against "share photos with nan"? How does the Facebook "only visible to people you interact with enough" algorithm affect the previous question's balance?

Without the weighting, I think we've easily reached the point where the negative revelations are incessant and the benefits are declining (thanks in part to that previously mentioned feed algorithm).


I think that may be the point: to keep too many people from acting on what they've had enough of, and move them to "let's talk some more first" instead.

Such discussion wasn't required of anyone before they were allowed to express support or praise, so why would it be required from anyone who reached the point of boycott or criticism, before they are granted that they have reached that point?

I mean, it's fine to ask people why they think what they think of Facebook, but why refuse to accept others are moving for good reasons, maybe for better reasons than others have for not moving -- and ask them questions while letting them pass, without attempting to delegitimize them until they "explained themselves"?


> This needs to be a conclusion, not a premise.

What makes you think it's a premise and not a conclusion?

> Ideally, we would list the good/bad it does and assign weights to these points to determine if it is net harmful.

I have 11 years of data points and experiences about FB, I'm not going to enumerate all of them whenever there is another. I'll just say "typically fucking Facebook".

At this point, I don't even feel obligated to remember it all -- I can trust myself enough. We do that with "evil" people in our lives, too. We don't remember every dirty detail. We remember that there were a bunch of things, and that overall, we'd had enough at some point. I save the conclusion and the checksums and that's enough.

If you think I'm operating on a premise, instead of having come to a conclusion, how is that not you operating on a premise?

> I want to be convinced

Maybe, maybe not. What you are doing is delegitimizing even the conclusions others arrived at, by simply calling all of that mere premises. You saw a bunch of posts that struck you as knee-jerk, so all of it is knee-jerk.

You have to form your own opinion either way, that burden is not on others. Do you also expect anyone who says anything positive to give some kind of thorough, 1000-page assessment of all the benefits and cons? No, of course not. Same goes for criticism.

I for one don't care about the "evilness" of people I never met. For me the harm done through ignorance or fear or "evil" (which is just another form of weakness really) or not caring enough is the same harm.


> At this point, I don't even feel obligated to remember it all -- I can trust myself enough. We do that with "evil" people in our lives, too. We don't remember every dirty detail. We remember that there were a bunch of things, and that overall, we had it at some point. I save the conclusion and the checksums and that's enough.

> If you think I'm operating on a premise, instead of having come to a conclusion, how is that not you operating on a premise?

A mental conclusion can be a discussion premise. It doesn't invalidate your conclusion to say it should be a conclusion not a premise, because you're asserting a premise in a discussion which you are not (yet) supporting.

Also consider that you have seen eleven years of data points and experiences from the point of view of a small subset of users; there could perhaps be an equivalent cache of positive datapoints which tend to be significantly less interesting to report on.

Thus, supporting your point with concrete examples is how you contribute to a discussion, because then you and any adversaries can challenge you on the merits of your argument.

That's what the conclusion/premise separation is about.


> Also consider that you have seen eleven years of data points and experiences from the point of view of a small subset of users; there could perhaps be an equivalent cache of positive datapoints which tend to be significantly less interesting to report on.

You know what a thief can be like? 99.99% of the time, they don't steal. They sleep, they brush their teeth, they do all sorts of stuff, and every 2 weeks they take all the savings from an elderly woman.

How often you do need to see someone doing that to consider them a thief? Would you really care about any positive stories after seeing what you saw?

> That's what the conclusion/premise separation is about.

You can't speak for that other person. Let them respond for themselves.

koko775 9 days ago [flagged]

> You can't speak for that other person. Let them respond for themselves.

Sounds like you're more interested in competing with someone than talking about ideas.

> Would you really care about any positive stories after seeing what you saw?

...yes? Of course? I don't automatically dehumanize that hypothetical person for their deeds, whether I approve or not, or believe there should be consequences. Like, doesn't Facebook collaborate with law enforcement in tracking down predators and scammers and the like? It's not as simple as "bad. go away."

You should remember enough to make a proper argument, dude. A solid conclusion needs solid support.


> Sounds like you're more interested in competing with someone than talking about ideas.

No, I want to talk about the idea they expressed, not what you read into it. I can only do that with them.

> ...yes? Of course? I don't automatically dehumanize

Who's talking about dehumanizing? How is considering someone a thief dehumanizing?

edit2: Facebook is a company. It can't be dehumanized; it's not a person in the first place. The people in it are responsible for what they do. Someone who fought shitty decisions and then left is different from someone who, say, hires a firm to smear critics. That goes without saying as far as I'm concerned. But my thief example refers to Facebook, you see? Just because my argument apparently isn't easy for everyone to follow doesn't mean it doesn't stand.

So, where is the dehumanization? Who is being dehumanized when someone comes to the conclusion that FB is on the whole "bad"? Because we're not appreciating all the good, supposedly? When someone is a thief, or a murderer, or a company is, then all the fantastic qualities they may have are interesting to their personal friends, but not to the police, judges, or wider society. They know that the person probably has a lot of reasons for how they became that way, and nice sides to them, but they already have their own friends; it's simply completely out of scope of the subject at hand, unless it's directly related to the "crime".

> Like, doesn't Facebook collaborate with law enforcement in tracking down predators and scammers and the like?

Yes, and that thief who sometimes robs elderly women who then freeze to death outside also has a child, and he's very great with that child, and he sings in a choir, and all sorts of great things. But you don't judge a meal by the freshest ingredients, you judge it by the most spoiled. You judge a person by their worst deeds, and likewise a company. Again, we're talking about judgement with capital-J justice here, not being friends, thinking we're better, or thinking they're evil and we're good, or any of that.

> You should remember enough to make a proper argument, dude.

I think my argument is just fine, and it even seems to get to you a little.

edit: And what post of mine are you even referring to? Where did I make an argument without examples? I was responding to someone else complaining that everyone who thinks Facebook is "evil" (let's just say bad) is operating on a premise. I was responding to that general point, I'm not decebalus1, who in turn didn't have "Facebook is evil" as their main point either.

Their main point, if you would follow the guidelines, hasn't been addressed by anyone. Their main point is the first two paragraphs, the rest is bonus. How come you are trying to teach how to "make a proper argument, dude", but didn't notice that?

Oh, and clicking buttons instead of reasoning kinda gives away who is interested in discussion, and who is interested in dehumanization and censorship.


I'll list a few:

Conducting psychology experiments on people without their consent (or awareness of it)

Documented evidence that Facebook as it is now decreases people's quality of life, but they don't want to mess with their formula because that's what brings the dollars in.

There are many more, but these two are the biggest I can think of off the top of my head.


There are certain forums in which the groupthink makes good-faith discussion of certain topics impossible. Sadly, HN is one of those when it comes to Facebook being anything other than cartoonishly evil.

> I wonder if that's a good model, up-to-a-point for people and groups. With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.

Compartmentalization is part of the problem, not the solution. It's why some people can exploit and hurt a hundred thousand other human beings in the morning, enjoy their lunch break, fuck up the environment a bit more in the afternoon, and come home to a happy evening with their spouse. We shouldn't be encouraging more people to separate their private happiness from their interactions with society.

> ... in groups: if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.

The hope is that if enough people leave, the organization won't be able to keep functioning. Or, if it survives and becomes an evil-people-filled cesspool, it'll be easier to direct regulatory actions against it and just shut it down.

Also, Napoleon was famous, but not exactly a paragon of morality.


"Compartmentalization is part of the problem, not the solution."

This, this and this again. We as people are absolutely allowed to be complex, contradicting beings, but it is because of this richness of breadth and depth in our characteristics and beliefs that we ought to consider deeply the consequences of our actions.

Compartmentalization acts effectively in the opposite direction of that.


Like notacoward's post below, this one misses the point. The shaming is not about how good or bad or complex you are.

Some people have determined that Facebook is an organization that's producing some bad results. Socially shaming employees, removing the status they might have hoped to acquire and spiking attrition are tools being deployed to change Facebook's behavior and the behavior of Facebook's competitors.

You might claim that this pressure will never work, or that it will cause Facebook to shut down the "Are You Safe" feature, or gut the charity tools, or avoid developing these kinds of projects in the future. But we should avoid hand-wavy claims that Facebook and the people who work there are just too complicated to influence.


> you close the dialog with that person that allows you to change opinions and outcomes.

So if we're nicer to Facebook, maybe it'll decide to not spy on people quite so much, undo the engineered-for-addiction notifications and feed, and curb its anti-competitive practices (but not give back any market advantage it has already gained through them, of course)? The executives will voluntarily decide to make the company earn less money, so they can be more ethical?

Is there any precedent for a large corporation acting in this way?


It's not that you're wrong. You're not. People are complicated, and everyone's the hero of their own story. There are plenty of people out there who give millions to charity but steal from the poor. As a general rule, though, when someone accuses you of being bad and your best argument against it comes down to "what is bad anyway?", you're probably on the wrong side.

Is Facebook universally bad? No. It's done lots of good things. And lots of good people work there. But Facebook is also doing lots of bad things. Your policy chief hired a PR firm to push antisemitic conspiracy theories about George Soros. "People are complicated" isn't an excuse.


While I agree that humans are complex and should not be generalized by one or a few actions, Facebook as a whole has repeatedly demonstrated they are not good stewards of the data entrusted to them by users. The actions of their leaders are in complete conflict with the messaging they portray.

From the ad campaign run by Facebook in 2018: "...From now on, Facebook will do more to keep you safe and protect your privacy, so we can all get back to what made Facebook good in the first place: friends. Because when this place does what it was built for, then we all get a little closer."

To anyone looking closely at the actions of executives and living outside the SV echo chamber, this statement is laughable.

While the actions of many inside the company have been noble, they too have been taken advantage of, just like those who used the platform for so many years.


> One of Napoleon's great gifts was in compartmentalizing pieces of his life. His tumultuous and frankly soul-crushing personal life (which affected him deeply) with Josephine never got in the way of his military victories. I wonder if that's a good model, up-to-a-point for people and groups. With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.

Also, the Napoleonic Wars killed 3 to 6 million people.


Over-humanizing people is also a bad way to look at these things, as it diminishes their contribution towards a more problematic group and class structure. While yes, most billionaires are probably fine people by many individual metrics, the billionaire class can only exist by extracting value off the backs of others.

Facebook as an entity might be run by great people who love their families and support their communities, but the entity has done tons of very sketchy things and anyone who willingly or knowingly takes a part within that, especially for personal benefit is responsible for the actions of the group as a whole.

Humanity might transcend class and social status, but it doesn't excuse anything.


> and anyone who willingly or knowingly takes a part within that, especially for personal benefit is responsible for the actions of the group as a whole.

And that is on top of "The Responsibility of Intellectuals"


> if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.

Is this not potentially an optimal outcome?

If companies who have a track record of doing unethical things suffer for it by being depleted of all employees who are not either unethical, incompetent, or both, this should dramatically compromise the effectiveness of the company. If this happens repeatedly, it provides a disincentive for unethical corporate behavior.


The 76-year-old woman I know who seems to be addicted to Facebook, and gets harassed by people reporting her posts and whatnot, just wants to talk to people and share photos. The people and the photos and the emotions and the words people bring themselves -- Facebook just adds grief to that.

At best it's invisible, at worst it just kicks people out with no recourse, because some troll reported them, and because they're not a FB employee or a celebrity. These people are people, too. You barge into the public and drag it onto your platform, and then you don't just dehumanize people on it, you completely remove them. The rest either is friend or not friend, blocked or not blocked, posts are liked or not liked. Yes, it's a very poor model -- don't assume everybody else is using it, too.

> It makes me really happy to see my friends thriving.

That's the equivalent of "it's fun with friends" for game reviews. Every multiplayer game has that. Every site that allows sharing of photos and text has that. Because people have that.

> With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.

When later? When was Facebook ever up for candid discussion? How come you blame it on those who, in absence of any say, ever, now think badly of Facebook?


> if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.

An interesting analogy... The 10th Amendment to the Bill of Rights was intended to allow each state to be diverse in its laws. If people in state x want the laws in state y to change, a constitutionalist would say that you should move to that state and work on changing the laws from the inside, via the 10th Amendment and the state's own Constitution. If you don't want to put in the work of changing the state from the inside, and would rather change the laws from the outside (i.e. via federal authority), then you're probably an authoritarian.

I guess the only point I'm trying to make, is yeah, it's hard. Thanks for your contribution.


> if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.

And then the organization becomes bad enough that it crumbles and dies.


> critical of the company nowadays

As opposed to when? When you worked there? When was this period that Facebook was a beacon of ethical behaviour?

> to explicitly dehumanize them for the sake of making the world simpler.

Like aliasing people into a point on a social graph and a wallet? Dehumanising behaviour, and monetising it, is the main ethical problem with Facebook. It's a bit rich to accuse opponents of oversimplification.

>It is a poor model

FB's market cap seems to disagree. Pigeonholing people for profit is an exceptionally lucrative model.

> Projects like charitable causes have raised a lot for charity

And the trains in Italy ran on time!

You worked there for five years and never had a clue. Maybe you're not the right person to provide a judgement on judgement.


So... we should be nice to Facebook, so as not to distract them during their advance on Prussia?

...and because treating them differently based on their actions may cause good people to apply their skills to endeavors other than growing Facebook’s wealth of private data?


I disagree with something you said:

>>To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them

The only reliable method for evaluating someone’s values is to assess the choices they make. What values do their actions suggest? Do these choices affect your ability to trust these people?

I fundamentally disagree that it dehumanizes to do this, and I also fundamentally disagree that anyone intends to dehumanize them with this approach. I find it troubling that anyone expresses such a black and white assessment of people trying to determine where others should fit in the herd.

Put another way:

the only way to judge someone’s ethics is by evaluating their action and comparing it against their words.


> if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.

This is just complete rubbish and it appears you are still rationalising the time you spent working for the company. If you are trying to achieve 'good' inside a company like Facebook then you are a complete sucker or, as Zuckerberg would say, just a 'dumb fuck'. You're being played. Facebook is not interested in doing good; that has been abundantly clear since almost day 1.


Do you own Facebook stock?

If what people are taking away from their time working for FB is, "you know what? Napoleon was pretty cool!" then that really buttons things up tidily, though probably not in the way you intend. Too bad he had the sads about his partner though, I guess.

To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them for the sake of making the world simpler

Baruch Spinoza more or less invented the concept of ethics in the 17th century, which (partially, and simplifying) is premised on the effect of "an action"[1] being good or bad. If this doesn't reach, let's go back 2,000 more years to Jainism, in which karma is comprised of particles in the universe that stick to a person through their actions (good/bad/undefined).

tl;dr: If your argument is that a person's actions should have no bearing on a sense of "good" or "bad," know that significant parts of the entirety of recorded history contradict you. [insert your own Godwin reference here]

Point being, calling someone "bad" or "good" only speaks to the preponderance of evidence one way or another, and it's absolutely within all of our rights to have an opinion on what FB/Zuck/SS have done with the aspects of our lives to which they have had access, down to discrete decisions. I mean, I don't think anybody would say that FB has obviated the concept of reputation, and having an effect on society doesn't only refer to the good. And really, I don't see anybody saying "Zuck = bad" nearly as much as I see "Zuck = good" and "Zuck has done bad things and made bad decisions." He's the face of the company, this is how society works when you're not moving fast and breaking things.

It's not about people being complex, it's about people shitting in the world's PII sink at the party. Specific people, with witnesses. These were actual choices made under consultation with many people, all of whom are extremely highly-compensated, to reduce the control you have over information about your actual life (what focus groups pay big money for). It's not outlandish to say that the ability to trade pictures with and keep up with people has not been a fair trade.

Forgive me if I discount the hosting of online charity facilities (and really: [3]), and the supplanting of phone calls and neighborly phone trees in times of disaster or crisis, but in the words of Nelson Muntz, "If you hadn't done it, some other loser would have. So quit milkin' it,"[4] and maybe that other loser would have protected their users more. (and please don't insult us with anything along the lines of "personal information doesn't have value until someone sells it")

To conclude with an even larger shadow, FB has made a shitton of money from these policies, money they use to hire people and pay them enough not to work at other companies that also pay a lot, and this money has been used to drive up housing prices and rent for their employees' convenience (nobody overbids for fun). Thus this infrastructure of data scams affects the ability of teachers to live near the schools in which they teach, police to live among those they patrol, and low-wage workers to not have to commute 2+ hours each way.[5] I don't know if working at FB inures you to these examples, but it seems clear that FB leadership does not display the maturity that I personally would like to see in those who have the ability to make these decisions. Zuck didn't personally raise rents, but he built machinery that did so. But hey, it ain't his fault his exploding knife-gun hurt someone, he painted it pink!

1. https://en.wikipedia.org/wiki/Conatus

2. https://en.wikipedia.org/wiki/Karma_in_Jainism

3. https://www.wired.com/story/nonprofits-facebook-get-hacked-n...

4. https://www.simpsonsarchive.com/episodes/3F22.html

5. https://www.curbed.com/2018/3/6/17082570/affordable-housing-...


I see Spinoza, I upvote :)

To give the benefit of the doubt I'm not sure if OP was objecting to the first step of placing a person or company along a good-bad continuum or the second step of reducing that continuum to a binary label.

In either case though, no amount of complexity makes it unreasonable to consider whether a company like Facebook is a net positive or negative force on society -- which is a binary decision that requires reduction.


I'd argue that reduction is required for qualitative decisionmaking and so is no barrier to legitimate opinion.

First, because we can never know everything that is going on outside of investigations like this one and public statements by FB.

Second, language is a lossy codec for thought, so the investigations and public statements by FB are themselves reductive. It's turtles all the way down.

This applies to both examples then: thinking FB is good or bad, and being required to eliminate reduction in order to make that decision.


Sounds fair enough to me

Not sure what point you're trying to make about Napoleon; he wasn't exactly someone to look up to. He was directly responsible for millions of deaths, basically a 19th-century version of Stalin (and the other "great leaders" of the 20th century).

I would recommend educating yourself on a contemporary Napoleon biography (I really loved Napoleon: A Life) before making such a strong claim. Napoleon is a very complex figure and your kind of blanket statement here is the kind I was rebelling against in my original comment.

The issue here is that writing a good account on Napoleon is very difficult. His enemies disseminated nonsense about him, and he was very careful about his public image even as a very young man.

I could get into the history: Napoleon was fighting feudal leaders who aimed to restore the monarchy in France. They declared war on him many more times than he did on them. The Napoleonic Code is one of the most influential documents in history, and it, together with Napoleon's influence, helped spread the ideals of the French Revolution throughout Europe. Of course there are many negative things to say about Napoleon, but he is a very interesting person who deserves a look.

https://www.smithsonianmag.com/history/we-better-off-napoleo...


>>Projects like charitable causes have raised a lot for charity. Probably would have been raised by a company FB bought or crushed. Or a company in another field.

>>I've seen the Are you Safe feature reduce so much stress during disasters.

Not exactly charity, it gets people to use FB more.

FB's problem (for the world) is that it is big, and as such it can be used to manipulate opinions or mess with people's psychology -- either by others, or by FB for $$$. The same would happen if Fox News or CNN were watched by a billion-plus people. They would have the power to move certain people in a different direction. Google is another one: a little bias in search/news results and maybe millions of votes would move one way or another.


Sometimes, even amidst the Brexit madness, I love our Parliament. Publishing and seizing documents like this is a move that proves politicians have the spine to look after people's interests. Time to get serious, act on this, and break up Facebook forever.

This is our committee system working as intended. I wish it still applied to other areas of governance like local government funding, lobbying, MPs in government and so on.

Late Edit: I must add to that list admission of ministerial responsibility. The last resignation with honour, rather than for political point scoring, was Lord Carrington resigning as Foreign Secretary in 1982. Since then it's a dead concept.


Yeah, it is a joy to behold. I hope it becomes even more powerful than it is already.

Seizing documents for a valid inquiry is one thing.

But does that follow that publishing them is OK too?

From the outside, it looks like these politicians are frustrated that a foreign CEO ignored their demand to appear before them (because he has no legal obligation to do so), and have decided to retaliate by releasing embarrassing private internal documents obtained during an investigation in the hopes that Facebook will be politically and financially damaged.

I get that people hate Facebook, but does that justify any level of bad behavior as long as it harms them?


All Parliamentary committees publish online, whether evidence is written or oral.

You can ask for an exception for part of your evidence, if you fully explain why, which the committee considers. The usual reasons for discretion apply. It's almost unheard of for some evidence not to be published at all, though it has happened. 1980s I think was the last case.

No idea what dusty precedent or procedures apply when someone refuses to attend or documents are seized. That doesn't happen much.

Maybe no one asked. Maybe this is the redacted-for-sensitivity version, as we have no idea how much was seized in the first place. I think we'd have to wait for the report to know.


Excellent, thank you for an actual response that isn’t “haha, screw Facebook, who cares!”

If this is just part of the normal investigative procedure rather than a gross abuse of authority, then so be it.


It's getting hilarious to see people more offended by FB's information being exfiltrated than the exfiltration of everybody's data that FB aided and promoted. Ah, but those are just your fellow people, not the money man.

The potential problem I have here is abuse of power by government.

I’d be pretty upset if Congress subpoenaed user data from Facebook and then selectively published embarrassing info on their political enemies. Even if I hated those people.


I’d be pretty upset if Congress subpoenaed user data from Facebook and then selectively published embarrassing info on their political enemies. Even if I hated those people.

I think it's more like this, which if you've been paying attention has been going on for a long time. If you want to bet that the US hasn't snatched data they want when they know someone is in the country even temporarily, I'll take that bet.

https://www.zdnet.com/article/warrantless-phone-laptop-searc...

Among people who have a problem with the Six4Three situation, I feel like the issue is really just that we found out about it. The powers are already there, waiting to be used.


The government is invading our privacy with surveillance and border searches, agreed. No defense or denial of that here.

But I don’t see stories where some congressperson is then publishing some of the fruits of that surveillance or searches just to embarrass political rivals. That would be especially beyond the pale.


Does everything have to be done by the American version of something before it's legitimate?

> I get that people hate Facebook, but does that justify any level of bad behavior as long as it harms them?

Personally, given the level of harm Facebook has helped to inflict on the world these last 2-3 years, yeah, I reckon I'm perfectly happy with people inflicting harm on the corporation, probably even to the point of ruination.


Seems like a pretty dangerous road to go down, personally.

Looking at some of our politicians in the US, I don't really want them to have unfettered power to misuse the law and their elected position to destroy any person or organization who raises their ire. Even if I don't like that person or organization and want to see them destroyed.


FWIW, the UK Parliament already has unfettered power, so we're already well down that road.

I believe Parliament has decided to limit its own powers - the Human Rights Act, membership of the EU, devolution of powers to Welsh and Scottish parliaments etc.

Of course these can be changed by Parliament... ;-)


I think this is probably something that needed to happen, for the furtherance of justice. What seems wrong to me is that politicians are essentially carrying out summary justice on particular people / a particular corporation, rather than fact finding to make informed decisions on legislation. If there's an argument that this publishing is necessary for an informed democracy, then I think that should be considered by a court.

That said, the UK has a very weak conceptualisation of separation of powers, thanks to parliamentary supremacy.


By doing business in the UK, Facebook has agreed to the UK's Privacy Policy.

I don't think this move achieved much of anything, other than to encourage business executives to use ephemeral messaging for their confidential internal communications.

WikiLeaks or someone has said that part of their purpose is to make sure the fear of leaks cripples the internal processes of certain organisations.

If Facebook were forced to mostly use ephemeral communication that would be somewhat crippling.

Also, I think there are regulations as to what kind of documentation must be archived and how (SOX?).


Indeed, kudos to them having the gall to do such a thing. Heavily lobbied congressmen would never do such a thing or even implement something like GDPR here.

I think this sets an awful precedent and it's astonishing that an action like this would be applauded. But you do you.

And what precedent is that? The idea that companies aren't above the law? That's a precedent I wish was more enforced.

I'm proud of my parliament today (and it's not often I can say that), even if in other parts they're tearing themselves apart.


What laws did Facebook break that warranted seizing and publishing internal documents?

> What laws did Facebook break that warranted seizing and publishing internal documents?

Parliament is sovereign. Facebook ignored Parliament. That is tantamount to blowing off an American court order.

More pointedly, Facebook has broken their agreements on keeping WhatsApp and Facebook data separate. These e-mails further show Onavo and Facebook conspiring to hide their intent around data collection from users, which likely breaks British privacy and honest trade law.


Parliament is not into enforcing laws - it writes them. Parliamentary sovereignty means it can publish internal documents[1] if it decides that's in the public interest; if not, the MP will be appropriately censured by their peers.

1. In the US, the president (executive) can declassify any classified information, the DOJ or judiciary may publicize (discovered) internal documents for trials/indictments before guilt is established (note: IANAL). In the UK, Parliament is supreme to the executive and judiciary.


In the US, Congress enjoys the Speech and Debate privilege, meaning pretty much the same. It's even in our (written, unlike the UK) Constitution.

It didn't break a law, but it also didn't comply with our sovereign parliament's right to investigate matters of interest to its members. In a manner of speaking, they are the law.

The law against screwing your users and against being anti-competitive. The latter is actual law; the former is the kind of thing that leads to street justice.

I'm glad to have the documents, but I have to admit it's really sketchy for a government to step in and do that. People are saying 'the sovereign body wanted it', but that's a worrying way to run things.

It set a pretty awful precedent when Facebook spread misinformation so effectively that my country voted to cripple itself. We are going to have justice: no foreign intervention or dark money will stop us. I rarely feel pride like I did today, because for all their faults it is clear to me that the current crop of Parliamentarians will hunt down those responsible for weaponising disinformation in our politics.

Here's Zuckerberg's response in which he "shares some more context around the decisions we made": https://www.facebook.com/zuck/posts/10105559172610321

Quick textual analysis: in a pithy 623-word statement, Zuck manages to mention "shady", "sketchy" or "abusive apps" no less than 7 times. 8 times, if you include the time he mentioned Cambridge Analytica without using a sketchy adjective.

Notice the spin: Facebook as the White Knight protecting the public from the evils of sketchy apps. It's unclear how this will play out given that the public is losing trust in Facebook itself.

[Reference]

1. "some developers built shady apps that abused people's data"

2. "to prevent abusive apps"

3. "a lot of sketchy apps -- like the quiz app that sold data to Cambridge Analytica"

4. "Some of the developers whose sketchy apps were kicked off our platform sued us"

5. "we were focusing on preventing abusive apps"

6. "mentioned above that we blocked a lot of sketchy apps"

7. "We've focused on preventing abusive apps for years"

8. "this was the change required to prevent the situation with Cambridge Analytica"


Can you attach the text for people who have blacklisted all FB IPs?


No pull quotes from the documents that Parliament made public to illustrate the "context".

> Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat

> there was evidence that Facebook's refusal to share data with some apps caused them to fail

Stuff like this should trigger the EU Commissioner for Competition to withdraw the authorization that “allowed” FB to acquire WhatsApp and should force a split between the two entities. A fine (no matter how big) will be seen by FB and its investors as just the “cost of doing business”. Facebook in its current form needs to be split up again.


I have no particular love for FB or what they've done with data, but I don't understand why either of those points is particularly controversial or even unusual.

> Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat

Checking out your competition is pretty standard among all businesses, as is buying out the ones you can't beat.

> there was evidence that Facebook's refusal to share data with some apps caused them to fail

Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility - they can choose not to work with other apps for any reason, or no reason at all. And this is a particularly ironic thing to point out, considering that the main thrust is complaining that they _did_ share data with other apps. So it's bad if they do share data, but also bad if they don't?


> Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility - they can choose not to work with other apps for any reason, or no reason at all. And this is a particularly ironic thing to point out, considering that the main thrust is complaining that they _did_ share data with other apps. So it's bad if they do share data, but also bad if they don't?

To me, this would be Facebook taking advantage of their stronghold on the market to continue to dominate the market.

Let's remember this is not Facebook's data, this is their users' data. It should be up to the user who they share their data with, but they weren't given the choice to use their data with an app because Facebook decided they didn't like the way the other business competes with them in some aspect, or because Facebook isn't making money out of the third-party app.

An example: Facebook won't grant influencer marketing companies access to their API even though the user (the influencer) wants them to have the data so they can get paid based on the views, shares, comments, and likes of the posts they've been contracted for. The user has a legitimate desire to share that data, to the point where they will resort to screenshots, GDPR deletions, etc. to fulfil the need for that data to be shared. Facebook's reason for not allowing them access is simple: they don't make any money out of the deal.

So if they refuse to allow you to share your data with people you want to share it with, that is bad. Sharing your data with companies you don't want to share with is also bad.


> Let's remember this is not Facebook's data, this is their users' data.

No, it's not - short of a change in law it is Facebook's data. Facebook collected, maintained, and analyzed the data. It's their data; the fact that their users allowed Facebook to collect it does not change that fact.

GDPR allows users to view the data Facebook collected about them. If users want to manually request their data from Facebook and provide it to competing apps, they can. But this is the user's prerogative, not Facebook's.


> If users want to manually request their data from Facebook and provide it to competing apps, they can.

Actually, for a lot of it they can't. The GDPR downloads are incomplete and missing lots of data, like many other companies' GDPR downloads.

And in some cases these apps aren't competing; they just aren't making Facebook money, so Facebook denies them access and tells them what they need to do to add Facebook to the process so they can get access.

And if I can tell Facebook to delete something and they have to do it, they do not own it. Facebook are not in control of the data; they have access to it, but they can't share it without permission, they can't sell it without permission, they can't do a lot of things. This, to me, means they do not own the data.


And the downloads for people outside GDPR's reach (like Americans) are even more incomplete.

Facebook continues to play a game of "how much can we mislead them to get our way".


The requirement to make GDPR downloads available was never well thought out. For instance, say you requested your Facebook chat logs. Should Facebook provide this without redaction of the recipients' identities? I'd say no, providing this data without that redaction is a violation of the recipients' privacy. Especially for a service that is all about networking and sharing, it's difficult to walk the line between giving the user the data they requested, and respecting the privacy of other users.
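The kind of redaction being argued for here could look something like this toy sketch (the data shape and function names are hypothetical, not Facebook's actual export format): replace every counterpart in the requester's chat log with a stable pseudonym, so the requester gets their conversations back without exposing other users' identities.

```python
import hashlib

def redact_export(messages, requester):
    """Return a copy of a chat export where everyone except the
    requester is replaced by a stable pseudonym, so the requester
    still sees conversation structure without real identities."""
    def pseudonym(user):
        if user == requester:
            return user
        # Stable per-user token. A real system would use a keyed hash,
        # since a plain hash of a username can be reversed by hashing
        # candidate names.
        return "user-" + hashlib.sha256(user.encode()).hexdigest()[:8]

    return [
        {"from": pseudonym(m["from"]),
         "to": pseudonym(m["to"]),
         "text": m["text"]}
        for m in messages
    ]

export = [
    {"from": "alice", "to": "bob", "text": "hi"},
    {"from": "bob", "to": "alice", "text": "hello"},
]
print(redact_export(export, "alice"))
```

Using a stable (rather than random) pseudonym preserves conversation structure - the requester can still tell which messages came from the same person - without handing over the other party's name.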

The ability to tell Facebook to delete your data does not grant you ownership of that data. You do not have the right to tell Facebook to let some 3rd party access it using Facebook's infrastructure (which seemed to be the claim made by the original comment I responded to). The fact that the government forces companies to abide by certain regulations does not mean that the company is not the owner of the things that are regulated. A company might have its interview records audited to investigate discrimination. Does that mean the company does not actually own or possess those interview records?


>Should Facebook provide this without redaction of the recipients' identities? I'd say no, providing this data without that redaction is a violation of the recipients' privacy.

How is it a violation of privacy to tell a user with which other users they have communicated in the past?


Do you want your Facebook usage published because any one of the hundreds (thousands?) of the people you interacted with requested their data under GDPR?

It's not just telling the user. Once the info is published, said user can turn around and tell it to the rest of the world. Some people have already published their requested data, and it seems unlikely that people publishing said data face consequences.

Sure, nothing is stopping people from scrolling through chat logs and screenshotting them. But that's difficult to do at scale, so there are de-facto limits on how much info can leak this way. Also consider the situation in which you unfriend someone, thus preventing them from seeing past chat logs (I think - I haven't used Facebook in a long time). If that person does a GDPR request, should the company deliver data that the user is otherwise prohibited from viewing? If GDPR does mandate this, it seems like a legally-mandated side channel attack.


>> this is their users' data

> No, it's not - short of a change in law it is Facebook's data.

Er, actually it is. Under GDPR, and under previous data protection rules that lacked meaningful enforcement, users own their data. Companies at most get to "borrow" and/or keep it safe for them.

Alas, this fundamental fact seems to not have quite made it everywhere yet...

https://digitalguardian.com/blog/tackling-gdpr-challenge-1-e...


What about data that Facebook generates about you, but you did not provide? Is that information required by the GDPR to be provided to you, or does the GDPR only state that any information you provided to them must be returned? There's no way I'm reading the GDPR nor would I even understand all of it if I did.

We know that FB makes connections between you and other users even if you've never made those connections yourself. We've heard about the shadow profiles, etc. These are all data points that FB generated on their own. It seems like this is the information that is valuable. Information gleaned from ML analysis of images posted would also be their own. That has way more value than the image itself would. FB is doing way more under the hood than just slurping in the data users posted, and then displaying it back to them.


It doesn't matter where the data came from - uploaded, downloaded, generated. If it is PI, it is the property of the identified user, not Facebook. If they slurp 100 images of my face then that data is mine, and it is my right to have it deleted and to be told who they shared it with and for what.

Does anyone have an explanation as to why parent is downvoted? Is the information they presented incorrect?

Possible reason: use of the concept of "ownership" is actually intentionally avoided, since those advocating for "data ownership" generally mean things like "if people own their data, that means they can give it to companies and then that company can do what it wants with the data, since it is its property", which is more or less the opposite of what the goal of this privacy regulation is. The term is used sometimes in more general material, but it is problematic framing. People have rights to their data and how it is used, but those rights don't derive from ownership.

So in that particular sense, it is even stronger than ownership, at least in terms of preventing facebook from ever becoming the owner. (Not in the sense of having all of the rights usually associated with ownership, which include selling)

The article linked doesn't back up the claim that GDPR makes the user the owner of the data companies collect. Being entitled to obtaining a copy of a company's records about the user is a very different thing than making the user the owner of that data.

I'm not finding where in the GDPR text the user is stated to be the owner. The text frequently refers to the user as the "data subject". Ownership is not mentioned. The article you link to claims that the EU resident owns the data, but never backs up this claim with references to the GDPR text.

Seriously, it's mind-blowing to me that people can't make this distinction. I should be able to ask the library for a record of the books I've checked out. But that doesn't imply that they should hand those records out to whoever wants them, or that they need to make them available for anyone but me, even if I think it'd be more convenient for someone else to pick them up for me.

I agree with your point overall, but we need to recognize here that the problem isn't Facebook being selective, it's Facebook being a monopoly.

Selectively using business intelligence data is what makes a company a 'monopoly' these days?

I suggest re-reading your parent comment because that's not what was said at all.

Even if it was the User's data and not FB's (which is argued downthread a bit...) there's nothing that requires Facebook to make it accessible to other companies. There is real cost in creating an API, running it, managing it, etc.

One could also argue that the User already has all their own data, they just didn't collect it very well. So really they're trying to use FB as their data collection platform - which FB is free to do or not, as they choose.


Facebook won't grant influencer marketing companies access to their API even though the user (the influencer) wants them to have the data so they can get paid based on the views, shares, comments, and likes of the posts they've been contracted for. The user has a legitimate desire to share that data, to the point where they will resort to screenshots, GDPR deletions, etc. to fulfil the need for that data to be shared. Facebook's reason for not allowing them access is simple: they don't make any money out of the deal.

This seems so ludicrously backwards to me. How is my decision to upload my data to Facebook a meaningful claim that they should run their platform in a particular way for my convenience, even if there's no business justification?

Allow me a tortured metaphor :)

Imagine you own a very popular art museum and you let any artist display their work there for free. Visiting requires a free membership, but visitors must apply for it. Competing museums and galleries are always trying to get a membership so they can come through and scope out which artists are most popular and then woo them away. You reject those memberships.

How on earth is this unfair to anyone?

Now, letting those competing museums in is obviously better for the artist, but does that give them any right to demand it? You're providing this space for them for free! You're providing a huge audience to them, for free! What possible justification do they have for demanding that you let your competition come strolling through and hurt your business just so it'll make their life a little better? If you don't like it, you can pull your work from the museum at any time.

So many of the critiques of companies like Facebook and Apple come down to people wanting all the benefits that these platforms provide, but without any terms, inconvenience, restrictions, or tradeoffs. Ridiculous.


Let's go with the tortured metaphor.

Popular art museums generally allow entry without requiring membership. They make up for that by collecting donations or an entry fee, running a comparatively expensive cafe, and ensuring everyone leaves via the gift shop. They might promote heavily, employ people to individually sell memberships, or give free membership in exchange for monthly gift-shop and offers spam. One or two might be a little too heavy-handed in promoting membership over a single visit.

They don't plant a bug in your pocket, without knowledge or consent, to learn what other art museums and galleries you are visiting, so as to assess whether they are a worthy target of restriction or takeover. They don't insist on knowing your entire contact list and SMS history just to let you look at the pictures. They don't ban employees of other galleries and museums from visiting - not least because 9 times out of 10 they would not know if someone worked for a different city's gallery.

That's enough torturing. ;)

One of the above is a fair exchange, that can be freely and knowingly chosen. The other is using undisclosed and underhand methods to get extra leverage. That, by definition, cannot be fair. Turns out it's illegal too.

This is one of the very rare cases where I wish the UK had a little more of the US's litigious culture. In my 50 odd years it's the first and only case. Normally I wish for the reverse. :)


If the museum waived the entry fees for users that opted into having their info tracked and the data monetized, there are probably people who would prefer to compensate the museum that way. It'd probably open up access to the museum to poorer people that couldn't afford the ticket. Heck, I'd probably do it. My contact info has almost certainly been collected already, so for me it's basically instant savings. Your main criticism is that the data collection is undisclosed. I'm sure there's debate to be had over whether Facebook was transparent about its data collection, but they always did tell users about it in some capacity.

How much would Facebook cost if users had to pay cash? I think the Economist did a survey on this topic with Google search and the average price people were willing to pay was $1,500 a year. If Google switched from monetizing user data and advertising to charging money, how much human productivity would be lost because some people can't afford a good search engine? How much would this exacerbate the academic difficulties of poorer students if we add the fact that their classmates can search online information more effectively because their parents can afford Google search?

The lack of tangibility of paying for products with personal data instead of money can be irksome, but it has created an unprecedented ability to build large, complex services while delivering them free of (monetary) cost to the end user.


That might be the average price Economist readers would pay. I suspect it would be a lot lower for most users.

That aside - FB never gave users that choice. Where many media services - including AOL, streaming media companies, and newspapers (and the Economist) survive by charging a subscription, FB has never attempted to use that business model.

Why? Because FB has always been a data harvesting and user monitoring/profiling operation that happens to operate a social media front, rather than vice versa.

Ditto Google for search.

And "telling users about it in some capacity" is very different to giving users a list of buyers and full details describing what their personal data was used for.

Realistically, no one outside the industry - and not that many people in the industry - understand where this data ends up and what it's used for.

Pretending otherwise is casuistry and special pleading. There is no way users can estimate the true value of their data, either individually or in aggregate - because the value is determined by buyers who remain hidden and unnamed, and neither FB nor the buyers are obliged to explain any part of the process.

There is no informed consent here. It's a perfect corporate asymmetry, and very much designed that way.


Facebook probably would charge users directly if it were acceptable to do so. Unfortunately (in my opinion) the optics around providing that choice would be even worse than not providing that choice at all. People would portray this as akin to racketeering, making users cough up the dough if they don't want to be tracked. Ironically, concern over Facebook's data collection inhibits their ability to explore direct monetization methods.

Also, the value of this information isn't secret: Google and Facebook are public companies and publish their revenue numbers, no? It's not hard to divide by the number of active users to get an average value per user.
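That division really is back-of-the-envelope arithmetic. As a sketch (the figures below are illustrative assumptions - roughly Facebook's reported full-year 2017 revenue and year-end monthly active users - not numbers from this thread):

```python
# Back-of-the-envelope average revenue per user (ARPU).
# Assumed figures: ~$40.65B FY2017 revenue,
# ~2.13B monthly active users at the end of 2017.
annual_revenue_usd = 40.65e9
monthly_active_users = 2.13e9

arpu_per_year = annual_revenue_usd / monthly_active_users
print(f"~${arpu_per_year:.2f} per user per year")
```

That works out to roughly $19 per user per year as a global average - an order of magnitude below the $1,500 willingness-to-pay figure cited above, though ARPU varies widely by region.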


I suspect that Facebook would continue to collect all the data, but just not show ads, if someone elected to pay a monthly subscription.

True, this is what was proposed last year. Essentially analogous to YouTube Red.

I added the membership bit just to make the whole "rejection of competitors" part make more sense.

They don't plant a bug in your pocket, without knowledge or consent, to learn what other art museums and galleries you are visiting, so as to assess whether they are a worthy target of restriction or takeover. They don't insist on knowing your entire contact list and SMS history just to let you look at the pictures. They don't ban employees of other galleries and museums from visiting - not least because 9 times out of 10 they would not know if someone worked for a different city's gallery.

I'm not defending all of Facebook's practices. In particular, any surreptitious attempt to collect sensitive user data without permission, or in violation of permission, is terrible. We might differ on what constitutes "sensitive user data" or "permission" but whatever.

Regardless, that's not what we're actually talking about here, and if the museum in question did ask for user consent to a full cavity search before going through, I don't think that actually changes the calculus of what's fair for them to provide to their competitors.


Your metaphor is good, but misses the part where this art museum has either bought or forced downsizing/closure on all other competitive museums. That forces artists to choose this museum or one that it owns if they want any recognition/income from the work they do (this museum certainly isn't paying them for the work).

That would be a fair concern if it were true. But Facebook is hardly the only way to build an audience. What about Twitter, or Snapchat, or just building and hosting your own site and using Facebook as just another promotion channel?

> is not Facebook's data

Look at the T&Cs sometime. Once you upload something, it's theirs. Once they sniff something off your phone, it's theirs. Once you look at something in another tab while FB is open, it's theirs.


GDPR, and even the former 1998 Data Protection Act disagrees. Ts&Cs can't override statute.

https://en.wikipedia.org/wiki/Data_Protection_Act_1998#Data_...


My understanding is that GDPR still doesn't say that it's your data and not Facebook's; it just gives you certain rights to that data, such as the right to have it deleted.

Facebook is the Data Controller, not the owner of the data

I’m pretty sure Facebook has made a substantial investment in structuring their data flow & server locations with the specific intention of avoiding the reach/responsibilities of GDPR.

> I’m pretty sure Facebook has made a substantial investment in...avoiding the reach/responsibilities of GDPR

Because they’ve been so careful about following other laws?


Yes, and in doing so they actually broke the law, and didn't succeed either.

> Once you upload something, it's theirs. Once they sniff something off your phone, it's theirs. Once you look at something in another tab while FB is open, it's theirs.

According to Facebook TOS, you own the data but grant them an open-ended licence to use said data.


Ownership is a fuzzy word. Are we talking copyright? What do I sell if I sell you data?

As a Facebook user I'm glad they don't grant access to influencer marketing companies. Those companies drive the creation of trash content which would harm the Facebook user experience.

"Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility"

[the following paragraph does not assert that FB is a monopoly, it is agnostic]

The law says that a company with a monopoly needs to act a bit like a public utility. Recall what Microsoft did to Netscape -- was there anything wrong with that? Is a company allowed to take action against a competitor? The normal answer is "Yes". But different rules apply if the government can prove that a company has monopoly power in some market. Actions that are allowed among normal competitors are no longer allowed once a company has near-monopoly power. I think fast-growing tech companies get into trouble because the founders don't realize how much the company has grown. Bill Gates could use aggressive tactics during the 1980s because Microsoft was still small. He got into trouble in 1994 because he didn't realize how much Microsoft had grown, and how much the rest of the world suddenly regarded it as a behemoth. Therefore he was no longer allowed to pursue the aggressive tactics that he'd previously been free to use.

Arguably, the same reality has now caught up with Facebook.


> I don't understand why either of those points is particularly controversial or even unusual

Facebook is unusually large and lawless. Their favoring industry incumbents decreases the economy’s dynamism. If you want to compete with Airbnb, now, you must not only launch a better product. You must also curry favor with Facebook. That’s a lot of trust for society to put in a company with a track record of terrible judgement. (This is also why we regulate monopolies.)


Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility

I think this point is, at least partially, arguable. Take "political media": FB is so dominant and powerful in this arena that excluding individuals or parties comes very close to denying people their rights to free speech and/or association.

Idk if you can make that case about apps, because their app platform is not that important, but you could make it about Android or iOS... maybe AWS.


> Checking out your competition is pretty standard among all businesses, as is buying out the ones you can't beat.

To be clear -- they marketed Onavo as a VPN service to protect your personal data from spying, and then they spied on your personal data to figure out what products you were using.


> So it's bad if they do share data, but also bad if they don't?

Yeah. I'm guessing they have a bad business model.


Agreed. Facebook purchased third-party data to guide their acquisition strategy. Any other company could have hired this firm and made the same decisions.

The selective access granted to competing apps, based solely on this information, is an antitrust problem though. If the apps violated rules that apply to everyone, that's different, but that doesn't appear to be the case?

Edit: turns out Onavo had access to specific, proprietary data, acquired through its own shitty apps, which Facebook acquired. This is pretty shady.


> Any other company could have hired this firm and made the same decisions

Facebook bought Onavo in 2013 [1].

[1] https://techcrunch.com/2013/10/13/facebook-buys-mobile-analy...


Interesting, didn't know that. Did Onavo have access to data that other public BI firms don't?

> Did Onavo have access to data that other public BI firms don't?

Yes. Onavo built “consumer-facing apps to help optimise device and app performance and battery life on iOS and Android devices” [1] while piping their users’ data to Facebook.

[1] https://techcrunch.com/2013/10/13/facebook-buys-mobile-analy...


So they were basically the spamware apps like flashlight and other 'performance enhancers'? That makes this kind of worse, especially if they continued it after acquisition.

Not really, they built an actual data compression VPN for users with limited data plans and did traffic analysis on the data that went through that pipe.

I agree that the first point seems pretty routine, but the second one is a problem. Look at Microsoft in the late 90s. They changed their system APIs and refused to publish the changes for the express purpose of killing Netscape. That was later deemed to be illegal. Facebook using their market dominance to kill other apps seems very similar.

I think they were mostly using their market dominance to kill apps that decided to live on the Facebook platform in the first place.

If you build an app that relies solely on Facebook login and access to Facebook data as key parts of your business model then are you really a competitor?


Well, if you build an app that runs on Windows like Netscape did, are you really a competitor of Microsoft? Turns out, you might be, if Microsoft want to make their own app. Similarly, Facebook might want to introduce or change native functionality, or monetize their platform, in a way that competes with your app.

That's a good point of comparison, but the difference, I think, is that at the time Windows was the dominant OS and most people's only option. For Microsoft to come out and compete with Netscape and prevent Netscape from operating on their OS would be a death sentence for Netscape, because it is not reasonable for Netscape's response to be "Fine, we'll make our own OS!".

There's a huge difference between "build your own OS" and "build your own website login".

It's not a very tall order to create a social website that doesn't use FB login. Social websites should not by default have a right to all of facebook's users. They should have to build that user base themselves.


Would you say Facebook has monopoly position in social networks?

Unless they're a monopoly...

I agree with the conclusion, but I don't think it should be done through the current antitrust framework.

Your first point and most of the others may or may not be illegal. If they are legal, what the MPs need to do is their jobs: make laws. But most of these are not directly related to company size and/or market share.

Your second point does sound like a trust and I think you're right to link this to the WhatsApp sale.

But... Regardless of the FB/WhatsApp conclusion, that is a one-off that won't lead to much systemic change whether it's a fine or an order to de-merge.

What we need (imo) is a whole new approach to antitrust that doesn't hinge on a definition of monopoly.

Companies beyond a certain size should just be put under a different set of obligations than smaller ones. They should be assumed to have market power by merit of their size.

When it comes to gdpr and such, these need to be written differently for large companies. Basically, no more equality before the law for companies. What we get in return is rule of law.


ExxonMobil is a huge company. How much market power do they have over the price of crude oil? Not much, because it's a commodity.

Other factors matter far more than size.


In theory and on paper, sure. Making that into a functional legal system hasn't happened.

In any case, half my point is that pricing power is no longer the operative definition of market power. ExxonMobil can and does have all sorts of influence on governments, job markets and such. They can do tax things a restaurant can't.

What I am proposing is that the definition of a monopoly is not usable for policy. It's fine as an economic and academic concept, but not as a legal one.


Large companies aren't bad just because they tend to monopolize the market they operate in. They can also lobby far more effectively, then there's the issue of the job market competitiveness etc.

A world in which companies are not allowed to selectively withhold proprietary data from some competitors, and form partnerships with others, is a world that suboptimally promotes innovation and economic growth.

That's a statement that requires evidence and proof.

Market fundamentalists are so used to advancing arguments like this without being challenged that they have become almost completely divorced from the scientific method entirely.

Maybe being forced to share this kind of proprietary data, in this specific type of circumstance, will actually promote a more diverse marketplace with higher innovation. Maybe it won't.

Either way you don't just get to assert it.


You're not a "fundamentalist" because you mention the fundamentals of a field of study. It's impossible to have a discussion about anything if some things are not taken as axiomatic.

And given that the person you're responding to is stating something that most economists would agree with, I think the onus is on you to tear it down with "the scientific method" if you think the prevailing wisdom is wrong.

TLDR: don't provide proof for flat-earthers.


I have a degree in economics and I disagree with it.

If you were in fact an economist yourself, perhaps you would have recognized market fundamentalism as a term with a specific meaning, often used by economists:

https://en.wikipedia.org/wiki/Market_fundamentalism


That's not a term of art in economics.

No, you're a fundamentalist if you stick to an ideology (e.g. trickle-down) in spite of all evidence suggesting it is highly flawed.

> It's impossible to have a discussion about anything if some things are not taken as axiomatic.

You mean you personally find it impossible to have a discussion with someone who disagrees with you on the axioms of economics because it means you can't frame the argument in your own terms, which would be that there is no alternative to what you are espousing.


You don't actually know my views or what I'd espouse, so I'm curious about your hostility here :)

My only point was that you may have better success railing against the conventional wisdom by providing an actual argument that it's wrong, not by ranting and raving about the existence of prevailing wisdom itself, and demanding that everyone provide a detailed proof every time they reference it.

You should check out the works of Milton Friedman, you might find them interesting!


I'm very aware of Friedman's work and I think it's time we all stopped listening to the words of that fascist-apologist hack.

A world in which data monopolies and platform companies selectively withdraw much smaller companies' access to their services based on whether they wish those companies to fail or not probably isn't one which optimally promotes innovation and economic growth though.

Facebook is not a data monopoly, unless you take "data monopoly" to mean "has a set of a data that no one else does", in which case everyone is a data monopoly.

Nobody else has a comparable dataset on a comparable scale. Microsoft wasn't the only company supplying operating systems in the 1990s either.

Comparable dataset of what, exactly? Of Facebook's internal user data?

Think about how much data Amazon has about all the suppliers and products they've sold over the last twenty years. Should they be forced to hand it over to anyone who wants it just because its size and exclusivity make it very valuable?

What about the government's databases on everything under the sun? Should all of that be public?

Maybe the answer to the above questions is yes, I don't know, but I don't think it's axiomatic.


Nobody is suggesting everyone has a right to Facebook or any other company's data (just like nobody seriously suggested MS shouldn't be able to impose any restrictions on OEM Windows installations). They're suggesting that if a company sets itself up as a provider of data and other platform services to encourage everyone else in the market to become dependent upon it then selectively pulls the plug on the most successful companies or forces them into deeper partnerships, it's definitely harming the ecosystem. And that if Facebook has done that, it might also have fallen foul of laws which are there to stop that sort of practice.

I definitely see those arguments being made (in this thread, in fact), but that may not be the argument you're making.

I think the issue I have is with the idea that Facebook is successful, and therefore can no longer prevent their competitors from blatantly taking advantage of them. But I'm not sure where to draw the line.

If Visa and Mastercard decided for whatever reason that they wanted to kill some small banking chain and not allow payments for them or their customers to go through their network, that would be bad. But I don't think that means that they have to let an upstart competitor credit card startup have access to the huge multi-billion-dollar network they've built up, does it? Even if they might extend that network to other entities that don't compete with them?


I'd be pretty concerned if Visa introduced a Visa Mobile Payments platform and withdrew functionality from the fastest-growing payment apps because the mobile payments sector broadly competes with the credit card sector. I think that's a closer analogy than either of your examples.

Well, that'd be the only use case for that platform, right? Maybe a better example would be if they explicitly told companies who joined the platform that they couldn't compete with Visa's core business, and then Mastercard signed up because they have a mobile payments division. Would Visa be justified in kicking them off?

That's not what a monopoly is.

I'm not advocating for anything here, but isn't hurting innovation and economic growth exactly what FB is doing by snuffing out promising products based on analytics?

I think when you choose to sell your products on Amazon, or host your videos on YouTube or use FB login for your app, you're necessarily taking a risk by depending on a third-party for your success. It's possible to create your own login system or host your own videos or own your own e-commerce site and there are even a whole host of valid competitors for all these services too.

Would it be better if, instead of analytics, they flipped a coin?

Yes, because if they had to flip a coin they would buy fewer companies, which would lead to less market consolidation.

I know you're being facetious, but it might actually be?

In nature, random mutation leads to greater genetic diversity and speciation. In some mathematical optimizations, introducing random noise can help avoid getting stuck in local maxima.
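The optimization point can be illustrated with a toy sketch (everything here is hypothetical, purely for illustration): plain hill climbing on a bumpy function gets stuck at the nearest local maximum, while the same greedy rule with random jump proposals can escape into the better basin.

```python
import random

def f(x):
    # Bumpy 1-D objective: local maximum near x ≈ -1.38 (f ≈ 2.6),
    # global maximum near x ≈ 1.45 (f ≈ 5.4).
    return -x**4 + 4 * x**2 + x

def greedy_climb(x, step=0.01, iters=2000):
    # Deterministic hill climbing: only ever take small uphill steps.
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

def noisy_climb(x, sigma=1.0, iters=5000, seed=0):
    # Same greedy acceptance rule, but proposals are random Gaussian
    # jumps, so the search can occasionally leap between basins.
    rng = random.Random(seed)
    for _ in range(iters):
        candidate = x + rng.gauss(0, sigma)
        if f(candidate) > f(x):
            x = candidate
    return x

greedy = f(greedy_climb(-2.0))  # stuck at the local maximum (~2.6)
noisy = f(noisy_climb(-2.0))    # reaches the global basin (~5.4)
```

Starting from x = -2, the deterministic climber converges to the nearby local maximum, while the noisy variant almost surely finds the higher one given enough iterations.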


Using analytics to restrict apps based on their popularity - if that's what Facebook did - is definitely much more anti-competitive behaviour and even more likely to hurt growth than simply being arbitrary or completely random in their decision making. But yes, a non-arbitrary policy of allowing developers access if they followed a transparent set of rules would obviously be better.

Economic growth is overrated when it only goes to 1% of people.

Hidden in a lot of these economic growth arguments is an unstated assumption of the effectiveness of trickle-down economics, which has itself been shown to be ineffective many times (see https://www.thebalance.com/trickle-down-economics-theory-eff...)

And hidden in a lot of these anti-economic growth arguments is an unstated assumption of the effectiveness of planned economies, which itself has been shown to be ineffective many times :)

Are those the only two options you can see? Trickle-down economics or communism?

Planned economies need not be communist, and no, but I was attempting (probably poorly) to draw attention to the ridiculousness of the argument that being pro-economic growth requires believing in trickle down economics.

A world in which companies are not allowed to selectively withhold proprietary data from some competitors, and form partnerships with others, is a world that suboptimally promotes innovation and economic growth.

Maybe economic growth, but how is innovation promoted by limiting access?


This may be true as a rule of thumb. Different rules apply to monopolies (in most developed countries, anyway), as different dynamics come into play and hinder innovation.

... so long as everyone is complying with the law.

That's a rehash of the argument against the existence of proprietary tech (and patents).

It also has the same limitations: If there's no competitive advantage to having proprietary data, then what's the point of developing the tech and services to try and obtain it?


I think you've answered your own question.

There wouldn't be any point, and it would be totally fine.


How so?

How do you force a split when one part (Whatsapp) may not have enough revenue to sustain itself?

I can't answer your question, but I think you should be more specific.

Your phrasing obviously comes with the interpretation that if a split-up would bankrupt a part, it shouldn't be done. Why should this be taken into consideration?

Secondly, it has value. A different party could invest and keep it going. Why is it important that it sustains itself?


If the objective is to increase competition, by making WhatsApp into the Facebook competitor they currently aren't, you need them to stay in business.

Of course, if you just wanted to punish facebook by destroying shareholder value, or you're certain WhatsApp users will move to an FB competitor rather than to FB itself, you might not care if WhatsApp stays in business post-breakup.


As far as I'm aware, the only thing WhatsApp competed with Facebook over is Messenger, and even that seems like a stretch to me.

My impression of the situation is that Facebook bought WhatsApp for its data and user-base; not because it was competition.


I guess it's not like anyone is employed to work on Whatsapp or anything. It's not like millions of people use it to stay in contact with their families every day.

You split and one part dies. If the commission involved put up rules on the merger and FB broke the rules, this is what happens.

WhatsApp has a large enough user base, and a strong enough use case, that enough people will be willing to pay to sustain it.

What _will_ happen is that other free services will pop up and be used, and some users will migrate to those. Which is a good thing. We'll have actual competition on the field instead of a single dominant player.


I've long held a theory that many of the "basic infrastructure" pieces of technology would best be put forward by non-profits. The profit imperative can be an insidious corrosive influence.

When the WhatsApp acquisition happened while they had only a few dozen employees and were reportedly profitable off of their $1 per year business model, it was clear to me that the days when a small non-profit could run the world’s chat infrastructure weren’t far off.

That the Signal Foundation is now basically funded by interest earned on money made from that very WhatsApp sale is quite poetic.

https://www.google.com/amp/s/www.wired.com/story/signal-foun...


OT: Similar for health: in the UK you can see that once some parts were privatised, more and more gets privatised - not surprisingly, since these companies have a duty to grow.

Why should we support exploitative business models?

Why should we care if WhatsApp can't sustain itself?

I'll happily chip in a dollar a year or so.

Seize it and then sell it in an auction. WhatsApp is so insanely popular that it will for sure find a buyer.

Let it fail?

Same way you do when they may have enough revenue.

Can you really blame FB for not providing API access to competitors?

If you want to make FarmVille with FB login, go ahead, but if you try to make "FB 2.0" with FB login and all friend connections preserved, while keeping all the ad revenue yourself, obviously Facebook is going to put their foot down.


The good news is that Onavo was removed from the App Store. For some reason Google seem to be happy letting it stay on Android though.

"Don't be evil" -> "Don't be the most evil"

> A fine (no matter how big)

$200 billion, and a review in 12 months time?

That's five years' revenue, or $100 per user, which sounds reasonable. Facebook can put an easy "claim your $100" button on their site. Would that just be a "cost of doing business"?


Interesting challenge: how to design that button to make it conform to mandated guidelines on its appearance, size, position and wording whilst also making it look enough like the sort of probable-scam advert users generally ignore to limit the number of Facebook members who actually claim it. Facebook probably has data to help them with that too :D

would you take that money?

Yes, sure. I'm not an FB shareholder. I'll take the $100, and then GDPR them to make them forget I ever took the money.

Except you couldn't GDPR them, as they are allowed to keep data if it's business-critical, which I would argue "XYZ took the $100" is.

> A fine (no matter how big) will be seen by FB and its investors just as “cost of doing business"

Depends how big you think. FB had revenues of $40bil. So let's fine them $50 billion.


Anti-trust. Anti-monopoly. We needed to revive these movements five years ago.

> Facebook's refusal to share data with some apps caused them to fail

It's not clear what this means. Is it an API that returns "403 Forbidden", or more a case of a lack of competitive information?


https://www.digitaltrends.com/opinion/facebook-cuts-off-acce...

> Yes, within a day of Vine’s launch, Facebook pulled the ability for Vine users to find Facebook friends who were also using the new app.


Thanks