> Facebook email 24 January 2013
> Justin Osofsky – ‘Twitter launched Vine today which lets you shoot multiple short video segments to make one single, 6-second video. As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision.’
> MZ – ‘Yup, go for it.’
1. If you look at the docs, that's from Exhibit 44, which indicates it's actually an excerpt from a Messenger discussion, not an email
2. Twitter had previously blocked both Instagram and Tumblr in the same way
3. Facebook had previously blocked Twitter in the same way
4. In some of the other docs here, you can see that there was much more discussion about what their policy should be around reciprocity and apps competing with Facebook's features
5. The first line of that indicates that there was likely discussion/planning about this before that conversation
And for the record, Facebook did not "share data with Cambridge Analytica for shady research purposes". A rogue third-party developer created one of those shitty quiz apps for Facebook, and then proceeded to get users to sign up for it; several million did, which allowed said developer to harvest data thanks to the very permissive APIs that Facebook provided at the time. He then proceeded to sell this data to Cambridge Analytica. Facebook has a responsibility in what happened there, but "Facebook sold data to Cambridge Analytica" is a wildly misconstrued story.
This isn't true. Lots of companies wouldn't steal users' call logs - eg, Mozilla, Signal, and plenty of boring, normal ones who make TODO list apps or whatever.
It also isn't relevant. See how that argument flies in criminal court. "Anybody else would have stolen that car."
What we see here (again) is that FB does nasty things and it's in the public interest to stop them - along with "any other company" who does the same things.
All this anger over Facebook is ridiculous. Now, if you want to talk about Android and Google's decision to make it more difficult not only to control but even to know what data apps (especially theirs!) will take, that's a different matter....
Especially from geeks, and particularly geeks from the 1990s and earlier, when we were told that unless we promoted non-centralized publication models we'd see the very constellation of centralized, user-antagonistic, profiteering services we now have. Whenever someone says "but why would you want to host your own e-mail/web/chat server?", my head wants to explode. It's always "why would you" (or "why would you? it'll never be as good as GMail/Facebook/Twitter/etc."); never "maybe I should promote and help work on projects that make it easier".
We are all rogue third-party developers, and clearly the priority, much like at most businesses, is to make a profit; the customers/ethics come second.
I love the fact that you get bent out of shape that Facebook didn't sell it, though; it's a theme I've seen with Facebook employees: "But we didn't sell it!"
I'm not sure if they are lamenting the fact they didn't sell it, but I sure as hell can tell what they give a damn about. If you want to hide behind "anybody would have done it", let's take a trip down history's gravestones and figure out whether or not we should bother trying to do the right thing because it is the right thing to do.
Dubious, but still a good point. Meaning legislation to force companies to behave slightly less unethically is all the more direly needed.
> thanks to the very permissive APIs that Facebook provided
Why did they do this?
The discussion revealed in this release is pretty fascinating. For example, you can see that at some point Zuck's friends authorized 31 apps and 76% of those apps had "read_stream" access giving access to their entire newsfeed.
Through one lens this is Facebook locking down their API in an anti-competitive way, which is somewhat true, but mostly this feels like an API change making privacy improvements for users. (The Cambridge Analytica data came from an older app that was running before these changes were made...)
This is the same elision they use. My question was, in the face of almost two generations of awareness of the principle of least privilege (almost typed "lead" again!), why did they design the API so that it gave away so much information and data in the first place?
> Through one lens this is Facebook locking down their API in an anti-competitive way, which is somewhat true, but mostly this feels like an API change making privacy improvements for users. (The Cambridge Analytica data came from an older app that was running before these changes were made...)
Read the "Whitelisting" section. The only change they mention is turning off the ability to request permission to access the now-problematic data and information (let's say "D&I"). Of course, we also know that this is selectively applied. That's not "somewhat" anticompetitive, it's not necessarily different from the CA problem, and at any rate it's only a marginal privacy improvement for users, because there's (my estimate) no way in hell they're going to tell us who still has access to the APIs.
In April 2010 basically everything a new user posted to Facebook was public by default. They didn't "care deeply about people's privacy."
Facebook is far from blameless, and should be held to account for its scummy behaviour, and for enabling scummy behaviour.
Facebook is not a good actor, any way you look at it. They are selling data to first or second parties, who are using it to damage our country.
Funny that the committee ended up being the open ones.
But being forced to give away half your physical retail space is hardly the same thing as just letting them keep using an API that you provide explicitly for such use.
Also, more broadly: one would have quite a hard time making the case that Facebook isn’t nakedly, gleefully, and rapaciously user-hostile.
I think a lot of people who hate Facebook just have a hard time believing that most people just don't care about the same things you do, or to the same degree. They're still on Facebook and Instagram and Whatsapp because they see the world differently from you.
In this case, Facebook was deprecating the API and declined to provide special whitelist access to a competitor.
No? Where are you getting this read from? The documents clearly show them discussing it from a year prior to shutting down Vine's API access, and planning on announcing it publicly ~6 months prior.
I can't find anything from a quick google search on when the API deprecation actually took effect, but assuming the timeline from Exhibit 43 is accurate, Twitter actually had whitelisted access for over 3 months before being shut down.
Inch upon inch of columns are dedicated to their morning habits, favourite TV shows, fashion choices, and other fluff content to make them "relatable" to the average Joe/Jane. This is especially magnified when it comes to SV execs because they wear hoodies and t-shirts instead of bespoke suits.
And that's what leads to reactions like "I can't believe he'd be so callous to users", as if the person in question is a hard working bootstrapper and not a billionaire looking to maximize market share and profit.
CEOs of massive companies don't have time to write long and explanatory emails. They put people in charge that they trust, so they can just say one word or sentence and know that it'll get handled.
I don't like Zuck, but come on, you just described every CEO in America; when they have a choice, they will do this.
Let's not normalise sociopathology, even given its prevalence amongst business executives.
Maybe he had ulterior motives... Don't know. But opening up patents is certainly the opposite of squashing innovation.
So that's another one off the “CEO billionaire but not a sociopath” list.
Here's 4 other sources. There are many more on DuckDuckGo News.
> The apology comes after the spelunker, Vern Unsworth, who was involved in the early days of efforts to save the now-rescued boys’ soccer team, threatened legal action against the billionaire executive over the comment.
> Musk said on Twitter late Tuesday that he had made the claim out of “anger” because Unsworth had criticized his idea to rescue the boys with a “mini-submarine” made out of a SpaceX rocket part.
Burying competitors is a good thing. That’s the whole point. That’s literally the objective of every business in the entire economy.
Can you help me understand how you make the world a smaller place and connect people by "shut[ting] down their friends API access"?
If you did read, would you suggest other HN users read it?
>Michael LeBeau – ‘Hey guys, as you know all the growth team is planning on shipping a permissions update on Android at the end of this month. They are going to include the ‘read call log’ permission, which will trigger the Android permissions dialog on update, requiring users to accept the update. They will then provide an in-app opt-in NUX for a feature that lets you continuously upload your SMS and call log history to Facebook to be used for improving things like PYMK, coefficient calculation, feed ranking etc. This is a pretty high risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.’
>Yul Kwon - ‘The Growth team is now exploring a path where we only request Read Call Log permission, and hold off on requesting any other permissions for now. Based on their initial testing, it seems this would allow us to upgrade users without subjecting them to an Android permissions dialog at all.
This is huge. Doesn't this make Google guilty as well?
>‘It would still be a breaking change, so users would have to click to upgrade, but no permissions dialog screen.
Sidenote: I've noticed via uMatrix that Netflix on PC, during a show, is attempting to load FB JS... Netflix, wtf!
So this might be what you’re seeing, but normally it’s included in a precompiled JS application bundle.
If their application only needed to run on newer Android, I think they could rely on runtime permissions and not request this permission at all unless the user actually turns the feature on - but even now about a third of Android devices in use are on versions too old to support this.
I'm not sure I follow. An app can request permissions, and the user can allow or deny them. I don't understand how this puts guilt on Google. Can you elaborate?
At least, that is how I am interpreting it: it seems that their software is not functioning in the 'spirit' of what it is supposed to be doing.
Even when they added a sane permission model in Android $VERSION, developers were allowed to bypass it for years by just building apps targeting Android $VERSION - 1 instead.
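The loophole described above can be sketched as a config fragment (a hypothetical app, assuming the runtime permission model arrived in Android 6.0 / API 23): an app that declares a "dangerous" permission in its manifest but targets an older SDK level gets every declared permission granted in bulk at install time, so the runtime dialog never appears even on newer devices.

```groovy
// build.gradle (hypothetical app) -- targeting API 22, the level just
// below the one that introduced runtime permissions. READ_CALL_LOG is
// declared in the manifest as usual; with an API-22 target it is
// granted at install time, with no runtime prompt on Android 6.0+.
android {
    defaultConfig {
        minSdkVersion 16
        targetSdkVersion 22   // pre-Marshmallow: install-time permissions
    }
}
```

Google eventually closed this off by requiring new apps and updates on the Play Store to target recent API levels.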
Google's web security may be the best in the world, but Android security is a disgrace and they should be called on it. (Fuchsia may put them on top of the world if they ever switch Android to that, but we'll have to see whether that happens.)
So did this change? I installed Messenger recently and this is pretty much the first thing it requests (no thanks). It also asks to let people search for you by number (no thanks) and to sync with your contacts (no thanks, smells like LinkedIn).
I have zero permissions enabled for Messenger, so I guess it would then ask before uploading my call logs?
What's "PR" here?
> Facebook had been aware that an update to its Android app that let it collect records of users' calls and texts would be controversial. "To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features,"
People are complex. They are more complex than an action, or even a group of actions. To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them for the sake of making the world simpler. It is a poor model, and in a Dale-Carnegie-way it leads to poor outcomes, as you close the dialog with that person that allows you to change opinions and outcomes. It is the same with groups or companies: some parts of groups do good from some vantage point, some do bad.
I found myself inspired by a lot of what Facebook did. I loved working inside Infrastructure there, I was amazed by what people were innovating on every day. Projects like charitable causes have raised a lot of money for charity. I've seen the Are You Safe feature reduce so much stress during disasters. I keep meaningful dialogs going with friends I don't get to see often on FB and Instagram. It makes me really happy to see my friends thriving.
One of Napoleon's great gifts was in compartmentalizing pieces of his life. His tumultuous and frankly soul-crushing personal life (which affected him deeply) with Josephine never got in the way of his military victories. I wonder if that's a good model, up-to-a-point for people and groups. With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.
... in groups: if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.
Correct. But we're not talking about people here, we're talking about a corporate entity as a whole. It's a bit more complex than a person. It has an incentive model and a set of norms (culture) which enables people to do good/bad based on the situation to maximize their personal gains (whichever those are - personal, material, spiritual, you name it).
If this consistently enables people to do what is perceived externally as unethical, we have a problem.
Nice spin on it, but Facebook is still a bad actor (despite what you say). I've held this opinion since the beginning, before all the leaks and all the scandals, but nobody believed me. Very smart people were incentivized to join using the entourage (come work with other smart people) and money, then slightly brainwashed in a cult-like manner (us vs. them, wartimes, etc.). This enabled them to ignore blatant unethical behavior. I've seen all of these first-hand and it's bad. Bad bad bad.
This needs to be a conclusion, not a premise. I do not have a firm belief about whether or not this is true, but I keep seeing people list bad things about FB and draw a straight line to “thus it is evil”. Ideally, we would list the good/bad it does and assign weights to these points to determine if it is net harmful.
I want to be convinced, HN, but when I read comments like the ones in this thread (FB has no positive value whatsoever for its users, FB sells users’ data), it makes it hard for me to appraise.
In general, I agree, but the context here is dozens of HN commenters claiming that FB is a bad actor/net negative for the world. These commenters cite lists of gripes about the company, many legitimate, some illegitimate, but get hand-wavy when someone asks about the "net" in "net negative".
I'm not rejecting the conclusion, here, by the way! If the company is ultimately bad for society, this line of questioning should embolden the consensus HN opinion.
Faceboot directly optimizes to increase time spent on their site, wasting human potential.
Faceboot also provides an effortless way to check in with loved ones after a disaster, easing human suffering.
Two people will look at both of these facts, value each one differently, and come up with a different "net".
Furthermore, it is not appropriate to give Faceboot all the credit for either facet! People could check in with text messages, and people would be in dopamine loops even with software purely under their control.
So the only thing we can really do is discuss each facet independently, in the context of first principles / morals.
I personally think much of what is wrong with Faceboot is due to the conflict of interest from a third party mediating social relationships, and the inhuman scale of centralization. But the more that comes out about inherent social media narcissism, I also see that there is no silver bullet.
In practical terms, coming up with qualitative points and weighting them in a good-faith-but-ultimately-arbitrary-manner is good enough, and something that everyone does constantly. For example, I suspect the average HNer weights "wasted human potential from FB" more than "benefit of disaster reporting from FB" (which I also agree with - however "benefit of disaster reporting" almost certainly outweighs "detriment from nebulous analytics firm scandal").
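That arbitrariness is easy to make concrete. A toy sketch (the facets and weights are hypothetical, not real measurements): the same two agreed-upon facts yield opposite "net" verdicts depending on who is doing the weighting.

```python
# Two agreed-upon facts, scored as harm (-) or benefit (+) in arbitrary units.
facets = {
    "time spent / wasted potential": -1.0,
    "disaster check-ins":            +1.0,
}

def net(weights):
    """Weighted sum of facet scores: the 'net' in 'net negative'."""
    return sum(weights[name] * score for name, score in facets.items())

# Same facts, different (good-faith but arbitrary) weightings.
privacy_hawk = {"time spent / wasted potential": 3.0, "disaster check-ins": 1.0}
casual_user  = {"time spent / wasted potential": 1.0, "disaster check-ins": 3.0}

print(net(privacy_hawk))  # -2.0: net negative
print(net(casual_user))   # +2.0: net positive
```

Neither weighting is wrong; the sign of the result is a property of the weights, not of the facts.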
It's not hard - it's impossible. Even if we fully agree on a specific metric, a situation still has to be weighed against possible alternatives and integrated on timescales longer than our lives. A very basic example of this is a company optimizing for "profits" when they're actually optimizing for short-term profits at the expense of going out of business two years later. The future is unknowable in the same exact sense that every NP-hard problem is. Heuristics are the only way to tackle this.
> coming up with qualitative points and weighting them in a good-faith-but-ultimately-arbitrary-manner is good enough, and something that everyone does constantly
Of course, which is why I'm referring to "Faceboot" even while recognizing that people get utility out of it. I just think that focusing on arbitrary pairs of specific facets is already headed down the path of madness, which is why I stated my initial critique in terms of basic principles.
Yes. This is why every single comment of mine in this thread recommends a specific heuristic with which to tackle this exact problem.
How do you weight the individual criticisms, or individual benefits to achieve an objective positive/negative score? How do you weigh "installing spyware to see what companies we might buy" against "share photos with nan"? How does the Facebook "only visible to people you interact with enough" algorithm affect the previous question's balance?
Without the weighting I think we've easily reached the point where the number of negative revelations is incessant and the benefits declining (thanks in part to that previously mentioned feed algorithm).
Such discussion wasn't required of anyone before they were allowed to express support or praise, so why would it be required from anyone who reached the point of boycott or criticism, before they are granted that they have reached that point?
I mean, it's fine to ask people why they think what they think of Facebook, but why refuse to accept others are moving for good reasons, maybe for better reasons than others have for not moving -- and ask them questions while letting them pass, without attempting to delegitimize them until they "explained themselves"?
What makes you think it's a premise and not a conclusion?
> Ideally, we would list the good/bad it does and assign weights to these points to determine if it is net harmful.
I have 11 years of data points and experiences about FB, I'm not going to enumerate all of them whenever there is another. I'll just say "typically fucking Facebook".
At this point, I don't even feel obligated to remember it all -- I can trust myself enough. We do that with "evil" people in our lives, too. We don't remember every dirty detail. We remember that there were a bunch of things, and that overall, we had it at some point. I save the conclusion and the checksums and that's enough.
If you think I'm operating on a premise, instead of having come to a conclusion, how is that not you operating on a premise?
> I want to be convinced
Maybe, maybe not. What you are doing is delegitimizing even the conclusions others arrived at, by simply calling all of that mere premises. You saw a bunch of posts that struck you as knee-jerk, so all of it is knee-jerk.
You have to form your own opinion either way, that burden is not on others. Do you also expect anyone who says anything positive to give some kind of thorough, 1000-page assessment of all the benefits and cons? No, of course not. Same goes for criticism.
I for one don't care about the "evilness" of people I never met. For me the harm done through ignorance or fear or "evil" (which is just another form of weakness really) or not caring enough is the same harm.
> If you think I'm operating on a premise, instead of having come to a conclusion, how is that not you operating on a premise?
A mental conclusion can be a discussion premise. It doesn't invalidate your conclusion to say it should be a conclusion not a premise, because you're asserting a premise in a discussion which you are not (yet) supporting.
Also consider that you have seen eleven years of data points and experiences from the point of view of a small subset of users; there could perhaps be an equivalent cache of positive datapoints which tend to be significantly less interesting to report on.
Thus, supporting your point with concrete examples is how you contribute to a discussion, because then you and any adversaries can challenge you on the merits of your argument.
That's what the conclusion/premise separation is about.
You know what a thief can be like? 99.99% of the time, they don't steal. They sleep, they brush their teeth, they do all sorts of stuff, and every 2 weeks they take all the savings from an elderly woman.
How often do you need to see someone doing that to consider them a thief? Would you really care about any positive stories after seeing what you saw?
> That's what the conclusion/premise separation is about.
You can't speak for that other person. Let them respond for themselves.
Sounds like you're more interested in competing with someone than talking about ideas.
> Would you really care about any positive stories after seeing what you saw?
...yes? Of course? I don't automatically dehumanize that hypothetical person for their deeds, whether I approve or not, or believe there should be consequences. Like, doesn't Facebook collaborate with law enforcement in tracking down predators and scammers and the like? It's not as simple as "bad. go away."
You should remember enough to make a proper argument, dude. A solid conclusion needs solid support.
No, I want to talk about the idea they expressed, not what you read into it. I can only do that with them.
> ...yes? Of course? I don't automatically dehumanize
Who's talking about dehumanizing? How is considering someone a thief dehumanizing?
edit2: Facebook is a company. It can't be dehumanized; it's not a person in the first place. People in it are responsible for what they do. Someone who fought shitty decisions and then left is different from someone who, say, hires a firm to smear critics. That goes without saying as far as I'm concerned. But my thief example refers to Facebook, you see? Just because my argument apparently isn't easy for everyone to follow doesn't mean it doesn't stand.
So, where is the dehumanization? Who is being dehumanized when someone comes to the conclusion that FB is on the whole "bad"? Because we're not appreciating all the good, supposedly? When someone is a thief, or a murderer, or a company is, then all the fantastic properties they may have are interesting to their personal friends, but not to the police, judges, or wider society. They know that the person probably has a lot of reasons for how they became that way, and nice sides to them, but they already have their own friends; it's simply completely out of scope of the subject at hand, unless it's directly related to the "crime".
> Like, doesn't Facebook collaborate with law enforcement in tracking down predators and scammers and the like?
Yes, and that thief who sometimes robs elderly women who then freeze to death outside also has a child, and he's very great with that child, and he's singing in a choir, and all sorts of great things. But you don't judge a meal by the freshest ingredients, but by the most spoiled. You judge a person by their worst deeds, and likewise a company. Again, we're talking about judgement with capital-J justice here, not being friends, thinking we're better, or thinking they're evil and we're good, or any of that.
> You should remember enough to make a proper argument, dude.
I think my argument is just fine, and it even seems to get to you a little.
edit: And what post of mine are you even referring to? Where did I make an argument without examples? I was responding to someone else complaining that everyone who thinks Facebook is "evil" (let's just say bad) is operating on a premise. I was responding to that general point, I'm not decebalus1, who in turn didn't have "Facebook is evil" as their main point either.
Their main point, if you would follow the guidelines, hasn't been addressed by anyone. Their main point is the first two paragraphs, the rest is bonus. How come you are trying to teach how to "make a proper argument, dude", but didn't notice that?
Oh, and clicking buttons instead of reasoning kinda gives away who is interested in discussion, and who is interested in dehumanization and censorship.
Conducting psychology experiments on people without their consent (or awareness of it)
Documented evidence that Facebook as it is now decreases people's quality of life, but they don't want to mess with their formula because that's what brings the dollars in.
There are many more, but these two are the biggest I can think of off the top of my head.
Compartmentalization is part of the problem, not the solution. It's why some people can exploit and hurt a hundred thousand other human beings in the morning, enjoy their lunch break, fuck up the environment a bit more in the afternoon, and come home to a happy evening with their spouse. We shouldn't be encouraging more people to separate their private happiness from their interactions with society.
> ... in groups: if everyone good leaves organizations because they're "bad," well, those organizations will just be filled with the worst of us soon enough.
The hope is that if enough people leave, the organization won't be able to keep functioning. Or, if it survives and becomes an evil-people-filled cesspool, it'll be easier to direct regulatory actions against it and just shut it down.
Also, Napoleon was famous, but not exactly a paragon of morality.
This, this and this again. We as people are absolutely allowed to be complex, contradicting beings, but it is because of this richness of breadth and depth in our characteristics and beliefs that we ought to consider deeply the consequences of our actions.
Compartmentalization acts effectively in the opposite direction of that.
Some people have determined that Facebook is an organization that's producing some bad results. Socially shaming employees, removing the status they might have hoped to acquire and spiking attrition are tools being deployed to change Facebook's behavior and the behavior of Facebook's competitors.
You might claim that this pressure will never work, or that it will cause Facebook to shut down the "Are You Safe" feature, or gut the charity tools, or avoid developing these kinds of projects in the future. But we should avoid hand-wavy claims that Facebook and the people who work there are just too complicated to influence.
So if we're nicer to Facebook, maybe it'll decide to not spy on people quite so much, undo the engineered-for-addiction notifications and feed, and curb its anti-competitive practices (but not give back any market advantage it has already gained through them, of course)? The executives will voluntarily decide to make the company earn less money, so they can be more ethical?
Is there any precedent for a large corporation acting in this way?
Is Facebook universally bad? No. It's done lots of good things. And lots of good people work there. But Facebook is also doing lots of bad things. Your policy chief hired a PR firm to push antisemitic conspiracy theories about George Soros. "People are complicated" isn't an excuse.
From the ad campaign run by Facebook in 2018:
"...From now on, Facebook will do more to keep you safe and protect your privacy, so we can all get back to what made Facebook good in the first place: friends. Because when this place does what it was built for, then we all get a little closer."
To anyone looking closely at the actions of executives and living outside the SV echo chamber, this statement is laughable.
While the actions of many inside the company have been noble, they too have been taken advantage of, just like those who used the platform for so many years.
Also, the Napoleonic Wars killed 3 to 6 million people.
Facebook as an entity might be run by great people who love their families and support their communities, but the entity has done tons of very sketchy things, and anyone who willingly or knowingly takes part in that, especially for personal benefit, is responsible for the actions of the group as a whole.
Humanity might transcend class and social status, but it doesn't excuse anything.
And that is on top of "The Responsibility of Intellectuals"
Is this not potentially an optimal outcome?
If companies who have a track record of doing unethical things suffer for it by being depleted of all employees who are not either unethical, incompetent, or both, this should dramatically compromise the effectiveness of the company. If this happens repeatedly, it provides a disincentive for unethical corporate behavior.
At best it's invisible, at worst it just kicks people out with no recourse, because some troll reported them, and because they're not a FB employee or a celebrity. These people are people, too. You barge into the public and drag it onto your platform, and then you don't just dehumanize people on it, you completely remove them. The rest either is friend or not friend, blocked or not blocked, posts are liked or not liked. Yes, it's a very poor model -- don't assume everybody else is using it, too.
> It makes me really happy to see my friends thriving.
That's the equivalent of "it's fun with friends" for game reviews. Every multiplayer game has that. Every site that allows sharing of photos and text has that. Because people have that.
> With people, by compartmentalizing some unsavory perspective someone has, you have the ability to change it later on through discussion.
When later? When was Facebook ever up for candid discussion? How come you blame it on those who, in absence of any say, ever, now think badly of Facebook?
An interesting analogy... The 10th Amendment, part of the Bill of Rights, was intended to allow each state to be diverse in its laws. If people in state X want the laws in state Y to change, a constitutionalist would say that you should move to that state and work on changing the laws from the inside, via the 10th Amendment and the state's own Constitution. If you don't want to put in the work of changing the state from the inside, and would rather change the laws from the outside (i.e. via federal authority), then you're probably an authoritarian.
I guess the only point I'm trying to make, is yeah, it's hard. Thanks for your contribution.
And then the organization becomes bad enough that it crumbles and dies.
As opposed to when? When you worked there? When was this period that Facebook was a beacon of ethical behaviour?
> to explicitly dehumanize them for the sake of making the world simpler.
Like aliasing people into a point on a social graph and a wallet? Dehumanising behaviour and monetising that is the main ethical problem with Facebook. It's a bit rich to accuse opponents of over simplification.
>It is a poor model
FB's market cap seems to disagree. Pigeon-holing people for profit is an exceptionally lucrative model.
> Projects like charitable causes have raised a lot for charity
And the trains in Italy ran on time!
You worked there for five years and never had a clue. Maybe you're not the right person to provide a judgement on judgement.
...and because treating them differently based on their actions may cause good people to apply their skills to endeavors other than growing Facebook’s wealth of private data?
>>To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them
The only reliable method for evaluating someone’s values is to assess the choices they make. What values do their actions suggest? Do these choices affect your ability to trust these people?
I fundamentally disagree that it dehumanizes to do this, and I also fundamentally disagree that anyone intends to dehumanize them with this approach. I find it troubling that anyone expresses such a black and white assessment of people trying to determine where others should fit in the herd.
Put another way:
the only way to judge someone’s ethics is by evaluating their actions and comparing them against their words.
This is just complete rubbish and it appears you are still rationalising the time you spent working for the company. If you are trying to achieve 'good' inside a company like Facebook then you are a complete sucker or, as Zuckerberg would say, just a 'dumb fuck'. You're being played. Facebook is not interested in doing good; that has been abundantly clear since almost day 1.
To take a person and alias them into being "good" or "bad" based on an action or a series of actions is to explicitly dehumanize them for the sake of making the world simpler
Baruch Spinoza more or less invented the concept of ethics in the 17th century, which (partially, and simplifying) is premised on the effect of "an action" being good or bad. If this doesn't reach, let's go back 2,000 more years to Jainism, in which karma consists of particles in the universe that stick to a person through their actions (good/bad/undefined).
tl;dr: If your argument is that a person's actions should have no bearing on a sense of "good" or "bad," know that significant parts of the entirety of recorded history contradict you. [insert your own Godwin reference here]
Point being, calling someone "bad" or "good" only speaks to the preponderance of evidence one way or another, and it's absolutely within all of our rights to have an opinion on what FB/Zuck/SS have done with the aspects of our lives to which they have had access, down to discrete decisions. I mean, I don't think anybody would say that FB has obviated the concept of reputation, and having an effect on society doesn't only refer to the good. And really, I don't see anybody saying "Zuck = bad" nearly as much as I see "Zuck = good" and "Zuck has done bad things and made bad decisions." He's the face of the company, this is how society works when you're not moving fast and breaking things.
It's not about people being complex, it's about people shitting in the world's PII sink at the party. Specific people, with witnesses. These were actual choices made under consultation with many people, all of whom are extremely highly-compensated, to reduce the control you have over information about your actual life (what focus groups pay big money for). It's not outlandish to say that the ability to trade pictures with and keep up with people has not been a fair trade.
Forgive me if I discount the hosting of online charity facilities, and the supplanting of phone calls and neighborly phone trees in times of disaster or crisis, but in the words of Nelson Muntz, "If you hadn't done it, some other loser would have. So quit milkin' it," and maybe that other loser would have protected their users more. (and please don't insult us with anything along the lines of "personal information doesn't have value until someone sells it")
To conclude with an even larger shadow, FB has made a shitton of money from these policies, money they use to hire people and pay them enough not to work at other companies that also pay a lot, and this money has been used to drive up housing prices and rent for their employees' convenience (nobody overbids for fun). Thus this infrastructure of data scams affects the ability of teachers to live near the schools in which they teach, police to live among those they patrol, and low wage workers to not have to commute 2+ hours each way. I don't know if working at FB inures you to these examples, but it seems clear that FB leadership does not display the maturity that I personally would like to see in those who have the ability to make these decisions. Zuck didn't personally raise rents, but he built machinery that did so. But hey, it ain't his fault his exploding knife-gun hurt someone, he painted it pink!
To give the benefit of the doubt I'm not sure if OP was objecting to the first step of placing a person or company along a good-bad continuum or the second step of reducing that continuum to a binary label.
In either case though, no amount of complexity makes it unreasonable to consider whether a company like Facebook is a net positive or negative force on society -- which is a binary decision that requires reduction.
First, because we can never know everything that is going on outside of investigations like this one and public statements by FB.
Second, language is a lossy codec for thought, so the investigations and public statements by FB are themselves reductive. It's turtles all the way down.
This applies to both examples then: thinking FB is good or bad, and being required to eliminate reduction in order to make that decision.
The issue here is that writing a good account on Napoleon is very difficult. His enemies disseminated nonsense about him, and he was very careful about his public image even as a very young man.
I could get into the history: but Napoleon was fighting feudal leaders who aimed to restore the monarchy in France. They declared war on him many more times than he on them. The Napoleonic Code is one of the most influential documents in history; through it, Napoleon helped spread the ideals of the French Revolution throughout Europe. Of course there are many negative things to say about Napoleon, but he is a very interesting person who deserves a look.
>>I've seen the Are you Safe feature reduce so much stress during disasters.
Not exactly charity, it gets people to use FB more.
FB's problem (for the world) is that it is big and as such it can be used to manipulate opinions or mess with people's psychology. Either by others or by FB for $$$. Same would happen if Fox News or CNN were watched by a billion plus people. They would have the power to move certain people to a different direction. Google is another one, a little bias on search /news results and maybe millions of votes would move one way or another.
Late Edit: I must add to that list admission of ministerial responsibility. The last resignation with honour, rather than for political point scoring, was Lord Carrington resigning as Foreign Secretary in 1982. Since then it's a dead concept.
But does that follow that publishing them is OK too?
From the outside, it looks like these politicians are frustrated that a foreign CEO ignored their demand to appear before them (because he has no legal obligation to do so), and have decided to retaliate by releasing embarrassing private internal documents obtained during an investigation in the hopes that Facebook will be politically and financially damaged.
I get that people hate Facebook, but does that justify any level of bad behavior as long as it harms them?
You can ask for an exception for part of your evidence, if you fully explain why, which the committee considers. The usual reasons for discretion apply. It's almost unheard of for some evidence not to be published at all, though it has happened. 1980s I think was the last case.
No idea what dusty precedent or procedures apply when someone refuses to attend or documents are seized. That doesn't happen much.
Maybe no one asked. Maybe this is the redacted for sensitivity version as we have no idea the amount seized in the first place. I think we'd have to wait for the report to know.
If this is just part of the normal investigative procedure rather than a gross abuse of authority, then so be it.
I’d be pretty upset if Congress subpoenaed user data from Facebook and then selectively published embarrassing info on their political enemies. Even if I hated those people.
I think it's more like this, which if you've been paying attention has been going on for a long time. If you want to bet that the US hasn't snatched data they want when they know someone is in the country even temporarily, I'll take that bet.
Among people who have a problem with the Six4Three situation, I feel like the issue is really just that we found out about it. The powers are already there, waiting to be used.
But I don’t see stories where some congressperson is then publishing some of the fruits of that surveillance or searches just to embarrass political rivals. That would be especially beyond the pale.
Personally, given the level of harm Facebook has helped to inflict on the world these last 2-3 years, yeah, I reckon I'm perfectly happy with people inflicting harm on the corporation, probably even to the point of ruination.
Looking at some of our politicians in the US, I don't really want them to have unfettered power to misuse the law and their elected position to destroy any person or organization who raises their ire. Even if I don't like that person or organization and want to see them destroyed.
Of course these can be changed by Parliament... ;-)
That said, the UK has a very weak conceptualisation of separation of powers, thanks to parliamentary supremacy.
If Facebook were forced to mostly use ephemeral communication that would be somewhat crippling.
Also I think there exist regulations as to what kind of documentation must be archived and how (SOX?).
I'm proud of my parliament today (and it's not often I can say that), even if in other parts they're tearing themselves apart.
Parliament is sovereign. Facebook ignored Parliament. That is tantamount to blowing off an American court order.
More pointedly, Facebook has broken their agreements on keeping WhatsApp and Facebook data separate. These e-mails further show Onavo and Facebook conspiring to hide their intent around data collection from users, which likely breaks British privacy and honest trade law.
1. In the US, the president (executive) can declassify any classified information, the DOJ or judiciary may publicize (discovered) internal documents for trials/indictments before guilt is established (note: IANAL). In the UK, Parliament is supreme to the executive and judiciary.
Quick textual analysis: in a pithy 623-word statement, Zuck manages to mention "shady", "sketchy" or "abusive apps" no less than 7 times. 8 times, if you include the time he mentioned Cambridge Analytica without using a sketchy adjective.
Notice the spin as Facebook the White Knight protecting the public from the evils of sketchy apps. Unclear how this will play out given public losing trust in Facebook itself.
1. "some developers built shady apps that abused people's data"
2. "to prevent abusive apps"
3. "a lot of sketchy apps -- like the quiz app that sold data to Cambridge Analytica"
4. "Some of the developers whose sketchy apps were kicked off our platform sued us"
5. "we were focusing on preventing abusive apps"
6. "mentioned above that we blocked a lot of sketchy apps"
7. "We've focused on preventing abusive apps for years"
8. "this was the change required to prevent the situation with Cambridge Analytica"
> there was evidence that Facebook's refusal to share data with some apps caused them to fail
Stuff like this should trigger the EU Commissioner for Competition to withdraw the authorization that “allowed” FB to acquire WhatsApp and should force a split between the two entities. A fine (no matter how big) will be seen by FB and its investors as just a “cost of doing business”. Facebook in its current form needs to be split back up.
> Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat
Checking out your competition is pretty standard among all businesses, as is buying out the ones you can't beat.
Sharing or not sharing data with another app is likewise not an unusual decision. FB is not a public utility - they can decline to work with other apps for any reason, or no reason at all. And this is a particularly ironic thing to point out, considering that the main thrust is complaining that they _did_ share data with other apps. So it's bad if they do share data, but also bad if they don't?
To me, this would be Facebook taking advantage of their stronghold on the market to continue to dominate the market.
Let's remember this is not Facebook's data; this is their users' data. It should be up to the user who they share their data with, but users weren't given the choice to share their data with an app because Facebook decided they didn't like the way the other business competes with them in some aspect, or decided they weren't making money out of the third-party app. An example: Facebook won't grant influencer marketing companies access to their API even though the user (the influencer) wants them to have the data so they can get paid based on the views, shares, comments, and likes of the posts they've been contracted for. The user has a legitimate desire to share that data, to the point where they will resort to screenshots, GDPR requests, etc. to fulfil the need for that data to be shared. Facebook's reason for not allowing access is simple: they don't make any money out of the deal.
So if they refuse to allow you to share your data with people you want to share it with, that is bad. Sharing your data with companies you don't want to share with is also bad.
No, it's not - short of a change in law it is Facebook's data. Facebook collected, maintained, and analyzed the data. It's their data; the fact that their users allowed Facebook to collect it does not change that fact.
GDPR allows users to view the data Facebook collected about them. If users want to manually request their data from Facebook and provide it to competing apps, they can. But this is the user's prerogative, not Facebook's.
Actually, for a lot of it they can't. The GDPR downloads are incomplete and are missing lots of data like many other GDPR downloads.
And in some cases these aren't competing, they just aren't making Facebook money so Facebook denies them access and tells them what they need to do to add Facebook to process so they can get access.
And if I can tell Facebook to delete something and they have to do it, they do not own it. Facebook are not in control of the data; they have access to it, but they can't share it without permission, they can't sell it without permission, they can't do a lot of things. This, to me, means they do not own the data.
Facebook continues to play a game of "how much can we mislead them to get our way".
The ability to tell Facebook to delete your data does not grant you ownership of that data. You do not have the right to tell Facebook to let some 3rd party access it using Facebook's infrastructure (which seemed to be the claim made by the original comment I responded to). The fact that the government forces companies to abide by certain regulations does not mean that the company is not the owner of the things that are regulated. A company might have its interview records audited to investigate discrimination. Does that mean the company does not actually own or possess these interview records?
How is it a violation of privacy to tell a user with which other users they have communicated in the past?
It's not just telling the user. Once the info is published, said user can turn around and tell it to the rest of the world. Some people have already published their requested data, and it seems unlikely that people publishing said data face consequences.
Sure, nothing is stopping people from scrolling through chat logs and screenshotting them. But that's difficult to do at scale, so there are de facto limits on how much info can leak this way. Also consider the situation in which you unfriend someone, thus preventing them from seeing past chat logs - I think; I haven't used Facebook in a long time. If that person does a GDPR request, should the company deliver data that the user is otherwise prohibited from viewing? If GDPR does mandate this, it seems like a legally-mandated side channel attack.
> No, it's not - short of a change in law it is Facebook's data.
Er, actually it is. Under GDPR, and under previous data protection rules that lacked meaningful enforcement, users own their data. Companies at most get to "borrow" and/or keep it safe for them.
Alas, this fundamental fact seems not to have quite made it everywhere yet.
We know that FB makes connections between you and other users even if you've never made those connections yourself. We've heard about the shadow profiles, etc. These are all data points that FB generated on their own. It seems like this is the information that is valuable. Information gleaned from ML analysis of images posted would also be their own. That has way more value than the image itself would. FB is doing way more under the hood than just slurping in the data users posted, and then displaying it back to them.
One could also argue that the User already has all their own data, they just didn't collect it very well. So really they're trying to use FB as their data collection platform - which FB is free to do or not, as they choose.
This seems so ludicrously backwards to me. How is my decision to upload my data to Facebook a meaningful claim that they should run their platform in a particular way for my convenience, even if there's no business justification?
Allow me a tortured metaphor :)
Imagine you own a very popular art museum and you let any artist display their work there for free. Visiting requires a free membership, but visitors must apply for it. Competing museums and galleries are always trying to get a membership so they can come through and scope out which artists are most popular and then woo them away. You reject those memberships.
How on earth is this unfair to anyone?
Now, letting those competing museums in is obviously better for the artist, but does that give them any right to demand it? You're providing this space for them for free! You're providing a huge audience to them, for free! What possible justification do they have for demanding that you let your competition come strolling through and hurt your business just so it'll make their life a little better? If you don't like it, you can pull your work from the museum at any time.
So many of the critiques of companies like Facebook and Apple come down to people wanting all the benefits that these platforms provide, but without any terms, inconvenience, restrictions, or tradeoffs. Ridiculous.
Popular art museums generally allow entry without requiring membership. They make up for that by collecting donations or an entry fee, running a comparatively expensive cafe, and ensuring everyone leaves via the gift shop. They might promote heavily, employ people to individually sell memberships, or give free membership in exchange for monthly gift shop and offers spam. One or two might be a little too heavy-handed in promoting membership over visits.
They don't plant a bug in your pocket, without your knowledge or consent, to learn what other art museums and galleries you are visiting and assess whether they are a worthy target of restriction or takeover. They don't insist on knowing your entire contact list and SMS history just to look at the pictures. They don't ban employees of other galleries and museums from visiting, not least because nine times out of ten they would not know if someone worked for a different city's gallery.
That's enough torturing. ;)
One of the above is a fair exchange, that can be freely and knowingly chosen. The other is using undisclosed and underhand methods to get extra leverage. That, by definition, cannot be fair. Turns out it's illegal too.
This is one of the very rare cases where I wish the UK had a little more of the US's litigious culture. In my 50 odd years it's the first and only case. Normally I wish for the reverse. :)
How much would Facebook cost if users had to pay cash? I think the Economist did a survey on this topic with Google search, and the average price people were willing to pay was $1,500 a year. If Google switched from monetizing user data and advertising to charging money, how much human productivity would be lost because some people can't afford a good search engine? How much would this exacerbate the academic difficulties of poorer students, whose classmates can search online information more effectively because their parents can afford Google search?
The lack of tangibility of paying for products with personal data instead of money can be irksome, but it has created an unprecedented ability to build large, complex services while delivering them free of (monetary) cost to the end user.
That aside - FB never gave users that choice. Where many media services - including AOL, streaming media companies, and newspapers (and the Economist) survive by charging a subscription, FB has never attempted to use that business model.
Why? Because FB has always been a data harvesting and user monitoring/profiling operation that happens to operate a social media front, rather than vice versa.
Ditto Google for search.
And "telling users about it in some capacity" is very different to giving users a list of buyers and full details describing what their personal data was used for.
Realistically, no one outside the industry - and not that many people in the industry - understand where this data ends up and what it's used for.
Pretending otherwise is casuistry and special pleading. There is no way users can estimate the true value of their data, either individually or in aggregate - because the value is determined by buyers who remain hidden and unnamed, and neither FB nor the buyers are obliged to explain any part of the process.
There is no informed consent here. It's a perfect corporate asymmetry, and very much designed that way.
Also, the value of this information isn't secret: Google and Facebook are public companies and publish their revenue numbers, no? It's not hard to divide by the number of active users to get an average value per user.
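As a rough sketch of that division (both figures below are round-number assumptions for illustration, not exact filings):

```python
# Back-of-envelope average revenue per user (ARPU).
# Both figures are illustrative round numbers, not exact filings.
annual_revenue = 40_000_000_000  # ~$40B/year in revenue (assumed)
active_users = 2_000_000_000     # ~2B monthly active users (assumed)

arpu = annual_revenue / active_users
print(f"Average revenue per user: ${arpu:.2f}/year")  # → $20.00/year
```

With those assumptions, each user's data is worth on the order of tens of dollars per year to the platform - far below the $1,500/year figure users reportedly said they'd pay.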
I'm not defending all of Facebook's practices. In particular, any surreptitious attempt to collect sensitive user data without permission, or in violation of permission, is terrible. We might differ on what constitutes "sensitive user data" or "permission" but whatever.
Regardless, that's not what we're actually talking about here, and if the museum in question did ask for user consent to a full cavity search before going through, I don't think that actually changes the calculus of what's fair for them to provide to their competitors.
Look at the T&Cs sometime. Once you upload something, it's theirs. Once they sniff something off your phone, it's theirs. Once you look at something in another tab while FB is open, it's theirs.
Because they’ve been so careful about following other laws?
According to Facebook TOS, you own the data but grant them an open-ended licence to use said data.
[the following paragraph does not assert that FB is a monopoly, it is agnostic]
The law says that a company with a monopoly needs to act a bit like a public utility. Recall what Microsoft did to Netscape -- was there anything wrong with that? Is a company allowed to take action against a competitor? The normal answer is "Yes". But different rules apply if the government can prove that a company has monopoly power in some market. Actions that are allowed among normal competitors are no longer allowed once a company has near monopoly powers. I think fast growing tech companies get into trouble because the founders doesn't realize how much the company has grown. Bill Gates could use aggressive tactics during the 1980s because Microsoft was still small. He got into trouble in 1994 because he didn't realize how much Microsoft had grown, and how much the rest of the world suddenly regarded it as a behemoth. Therefore he was no longer allowed to pursue the aggressive tactics that he'd previously been free to use.
Arguably, the same reality has now caught up with FaceBook.
Facebook is unusually large and lawless. Their favoring industry incumbents decreases the economy’s dynamism. If you want to compete with Airbnb, now, you must not only launch a better product. You must also curry favor with Facebook. That’s a lot of trust for society to put in a company with a track record of terrible judgement. (This is also why we regulate monopolies.)
I think this point is, at least partially, arguable. Take "political media." FB is so dominant and powerful in this arena that excluding individuals or parties is very close to denying people their rights to free speech and/or association.
Idk if you can make that case about apps, because their app platform is not that important, but you could make it about Android or iOS... maybe AWS.
To be clear-- they marketed Onavo as a VPN service to protect your personal data from spying, and then they spied on your personal data to figure out what products you were using.
Yeah. I'm guessing they have a bad business model.
The selective access granted to competing apps, based solely on this information, is an antitrust issue though. If the apps violated rules that apply to everyone, that's different, but that doesn't appear to be the case?
Edit: turns out Onavo had access to specific, proprietary data, acquired through its own shitty apps, which Facebook acquired. This is pretty shady.
Facebook bought Onavo in 2013.
Yes. Onavo built “consumer-facing apps to help optimise device and app performance and battery life on iOS and Android devices”  while piping their users’ data to Facebook.
If you build an app that relies solely on Facebook login and access to Facebook data as key parts of your business model then are you really a competitor?
There's a huge difference between "build your own OS" and "build your own website login".
It's not a very tall order to create a social website that doesn't use FB login. Social websites should not by default have a right to all of Facebook's users. They should have to build that user base themselves.
Your first point and most of the others may or may not be illegal. If they are legal, what the MPs need to do is their jobs: make laws. But most are not directly related to company size and/or market share.
Your second point does sound like a trust and I think you're right to link this to the WhatsApp sale.
But... Regardless of the FB/WhatsApp conclusion, that is a one-off that won't lead to much systemic change whether it's a fine or an order to de-merge.
What we need (imo) is a whole new approach to antitrust that doesn't hinge on a definition of monopoly.
Companies beyond a certain size should just be put under a different set of obligations than smaller ones. They should be assumed to have market power by merit of their size.
When it comes to GDPR and such, these need to be written differently for large companies. Basically, no more equality before the law for companies. What we get in return is rule of law.
Other factors matter far more than size.
In any case, half my point is that pricing power is not the operative definition of market power anymore. Companies the size of ExxonMobil can, and do, have all sorts of influence on governments, job markets, and so on. They can do tax stuff a restaurant can't.
What I am proposing is that the definition of a monopoly is not useable, for policy. It's fine as an economic & academic concept, not as a legal one.
Market fundamentalists are so used to advancing arguments like this without being challenged that they have become almost completely divorced from the scientific method entirely.
Maybe being forced to share this kind of proprietary data, in this specific type of circumstance, will actually promote a more diverse marketplace with higher innovation. Maybe it won't.
Either way you don't just get to assert it.
And given that the person you're responding to is stating something that most economists would agree with, I think the onus is on you to tear it down with "the scientific method" if you think the prevailing wisdom is wrong.
TLDR: don't provide proof for flat-earthers.
If you were in fact an economist yourself, perhaps you would have recognized market fundamentalism as a term with a specific meaning, often used by economists:
> It's impossible to have a discussion about anything if some things are not taken as axiomatic.
You mean you personally find it impossible to have a discussion with someone who disagrees with you on the axioms of economics because it means you can't frame the argument in your own terms, which would be that there is no alternative to what you are espousing.
My only point was that you may have better success railing against the conventional wisdom by providing an actual argument that it's wrong, not by ranting and raving about the existence of prevailing wisdom itself, and demanding that everyone provide a detailed proof every time they reference it.
You should check out the works of Milton Friedman, you might find them interesting!
Think about how much data Amazon has about all the suppliers and products they've sold over the last twenty years. Should they be forced to hand it over to anyone who wants it just because of the fact that its size and exclusivity makes it very valuable?
What about the governments databases on everything under the sun? Should all of that be public?
Maybe the answer to the above questions is yes, I don't know, but I don't think it's axiomatic.
I think the issue I have is with the idea that Facebook is successful, and therefore can no longer prevent their competitors from blatantly taking advantage of them. But I'm not sure where to draw the line.
If Visa and Mastercard decided for whatever reason that they wanted to kill some small banking chain and not allow payments for them or their customers to go through their network, that would be bad. But I don't think that means that they have to let an upstart competitor credit card startup have access to the huge multi-billion-dollar network they've built up, does it? Even if they might extend that network to other entities that don't compete with them?
In nature, random mutation leads to greater genetic diversity and speciation. In some mathematical optimizations, introducing random noise can help avoid getting stuck in local maxima.
Maybe economic growth, but how is innovation promoted by limiting access?
It also has the same limitations:
If there's no competitive advantage to having proprietary data, then what's the point of developing the tech and services to try and obtain it?
There wouldn't be any point, and it would be totally fine.
Your phrasing obviously comes with the interpretation that if a split-up would bankrupt a part, it shouldn't be done.
Why should this be taken into consideration?
Secondly. It has value.
A different party could invest and keep it going.
Why is it important that it sustains itself?
Of course, if you just wanted to punish facebook by destroying shareholder value, or you're certain WhatsApp users will move to an FB competitor rather than to FB itself, you might not care if WhatsApp stays in business post-breakup.
My impression of the situation is that Facebook bought WhatsApp for its data and user-base; not because it was competition.
What _will_ happen is that other free services will pop up and be used, and some users will migrate to those. Which is a good thing. We'll have actual competition on the field instead of a single dominant player.
That the Signal Foundation is now basically funded by interest earned on money made from that very WhatsApp sale is quite poetic.
If you want to make Farmville with FB login, go ahead; but if you try to make "FB 2.0" with FB login and all friend connections preserved, while keeping all the ad revenue yourself, obviously Facebook is gonna put their foot down.
$200 billion, and a review in 12 months time?
That's 5 years revenue. $100 per user, sounds reasonable. Facebook can put in an easy "claim your $100" button on their site. Would that just be a "cost of doing business"?
Depends how big you think. FB had revenues of $40bil. So let's fine them $50 billion.
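Sanity-checking the figures floated in these comments (revenue of ~$40B/year per the comment above; the ~2B user count is an assumption):

```python
# "$200 billion is five years of revenue, or $100 per user" - checking the arithmetic.
annual_revenue = 40e9  # ~$40B/year, per the comment above
users = 2e9            # ~2B users (assumed)

fine = 5 * annual_revenue  # five years of revenue
per_user = fine / users

print(f"Fine: ${fine / 1e9:.0f}B, i.e. ${per_user:.0f} per user")  # → $200B, $100 per user
```

So the two framings in the thread are consistent: five years of ~$40B revenue is the same $200B figure as $100 each across ~2B users.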
It's not clear what this means. Is it an API that returns '403 Forbidden', or more a case of withholding competitive information?
> Yes, within a day of Vine’s launch, Facebook pulled the ability for Vine users to find Facebook friends who were also using the new app.