The irony is that the media is doing this now, with respect to anything related to Facebook. Facebook is a website with 99.9% benign user generated content. The news media has been spreading outrage and paranoia and trying to get 'engagement' since before Facebook was born.
If toxic content amounting to only 0.01% by volume receives disproportionate attention and focus, not merely permitted but actively encouraged by complex, opaque algorithms, then it is certainly a much bigger problem than its meagre character count would otherwise suggest.
To me, this needs to be split into two use cases. The fact that 0.01% of content gets viewed a lot more than average is not a bad thing. Much of user content is junk with a heavy dose of ads for seasoning. It is perfectly reasonable that original or thoughtful posts, even if unpleasant or provocative, get much higher traffic. The problem is the algorithmic reinforcement that encourages those making short-term sensationalist statements.
Let's still allow folks to voice their opinions, whatever they are, and advertise them via the digital-age equivalent of "word of mouth". Just keep it human, not algorithm driven. My 2c.
I think we can all agree that quantity isn't the right metric for this - whether that's number of posts or amount of traffic on a given post. What we're really trying to do is gauge what the conversation is.
Having said that, there are clear distinguishable problems we can point to that have an interesting quantifier. Anonymous 'Campaign' groups run by individuals that have previously been associated with Brexit campaigning are currently spending millions on Brexit advertising. There is no indication of where the money comes from to pay for these (we know it's not from the people who are associated with the page because they're just staffers).
In that way I agree with you - the thing to worry about is where the ad dollars are spent, but I don't think that's the only thing we should be worrying about.
The 1% Rule says that in any online community, 1% of participants create content while 99% lurk.
I'll propose Panarky's Corollary that 90% of the crap is produced by 0.1% of the participants and then shared, liked and amplified by 90% of participants.
These 0.1% pathological participants are corporations advertising, governments propagandizing, zealots proselytizing, and trolls antagonizing.
Facebook can see these patterns better than anyone, and they could eliminate or de-rank the material from the 0.1% if they wanted to.
But the 0.1% is disproportionately incendiary and therefore profitable, so Facebook has a very strong incentive to look the other way.
My presumption is that by far the single biggest signal on whether to display a post on the FB feed to more people is how many people like and share. That's a people problem at the core, not an algorithm problem.
It's still an algorithm problem at its core, because the algorithm determines which "single biggest signal" it uses for amplification. There's no reason why maximum shareability should be the main factor; it could be something else, such as some proxy for erudition.
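To make that concrete, here's a toy ranking sketch. All function names, fields, weights, and numbers are invented for illustration; this is not Facebook's actual algorithm. The point is that the posts stay the same, and only the choice of scoring signal decides which one gets amplified.

```python
# Toy feed ranking: the choice of scoring signal, not the posts
# themselves, determines what rises to the top.

def engagement_score(post):
    # Rewards maximum shareability; incendiary posts tend to win here.
    return post["shares"] + post["likes"]

def alternative_score(post):
    # An equally crude alternative proxy: longer posts from accounts
    # that post rarely rank higher.
    return len(post["text"]) / (1 + post["author_posts_per_day"])

def rank_feed(posts, score):
    # Highest score first.
    return sorted(posts, key=score, reverse=True)

posts = [
    {"text": "OUTRAGE!", "shares": 900, "likes": 5000,
     "author_posts_per_day": 40},
    {"text": "A long, careful essay " * 50, "shares": 12, "likes": 80,
     "author_posts_per_day": 0.2},
]

print(rank_feed(posts, engagement_score)[0]["text"][:8])   # the outrage post
print(rank_feed(posts, alternative_score)[0]["text"][:6])  # the essay
```

Swapping one function for the other flips the outcome, which is the sense in which the "people problem" is mediated by an algorithmic design choice.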
However, this shows that the root problem really is human. You can tell because you are suggesting designing an algorithm to fix a pre-existing problem. Without algorithms of any sort, this problem is there.
I don’t think it can be solved by algorithm.
Simply: it's all content, and its impact depends on the people who take the information in and then apply their beliefs and understanding to it.
Some content is simple - recipes for example.
Other content is impossible to parse: "they want to reduce hate speech" translates as "they want to remove conservative/right-wing speech" (this actually happened in a conversation yesterday, suggesting that the term "hate speech" is being transformed/co-opted).
Engagement in contrast, is simple, reliable and measurable.
I don't think that's true, it's just that the proxies are imperfect. But the specific alternative metric wasn't that important to my point.
> Engagement in contrast, is simple, reliable and measurable.
That's the root of the problem: the algorithms have been designed to use "simple, reliable, and [easily] measurable" things as their whole input, and then narrowly A/B tested to maximize some other easily measurable output. Looked at holistically, it's too simple to do an acceptable job.
If you took a similar attitude and applied it to create an algorithm for another discipline, medicine for instance, it'd be pretty clear it'd be a disaster.
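A toy illustration of that failure mode, sometimes called Goodhart's law: if you A/B test against an easily measured proxy, you can pick a different winner than the harder-to-measure goal would. All variant names and numbers below are invented.

```python
# Optimizing an easily measured proxy (clicks) can move you away
# from the harder-to-measure goal (reader value).

variants = [
    # (name, clicks_per_1000_views, reader_value_score)
    ("calm headline",      30, 0.9),
    ("clickbait headline", 120, 0.2),
]

best_by_proxy = max(variants, key=lambda v: v[1])  # what A/B testing picks
best_by_goal  = max(variants, key=lambda v: v[2])  # what we actually wanted

print(best_by_proxy[0])  # clickbait headline
print(best_by_goal[0])   # calm headline
```

The narrowly tested system converges on whichever variant maximizes the measurable number, regardless of whether that number tracks the thing anyone cares about.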
Without those spontaneous user-composed opinions and anecdotes, the platform would quickly become (or perhaps already has become) plastic and artificial. As anyone having to relate information through a bureaucratic communications department has certainly experienced, even the most unique, interesting and useful story in the world stands little chance against an eager team of copy editors and PR representatives.
That's not my starting assumption.
I think Facebook has a problem that its typical so-called "benign" users are creating less and less content, while malignant content-makers are not discouraged.
Not really. Facebook put it there for me. Time and again it shows me items like "[Friend] replied to a comment on [toxic thing]", where the thing gets a prominent preview etc. I haven't seen a real update from [Friend] in my feed for months, yet there are plenty when I visit their profile. Even if the friend is fighting the good fight and has commented to correct some misconceptions, it's clear that this particular action was selected for my feed because it's related to a controversial item.
If it weren't for comments from non-friends, Facebook would have no toxic content for me at all.
Facebook was great until it started inserting posts from people I didn't explicitly whitelist as friends, and I'm including news articles in that category.
You mean like the rumour-mongering, low-effort politics etc. in your standard tabloid?
The whole point of tolerance is to say "I don't agree with that, but I respect your right to do your own thing."
This is advocating the exact opposite. Under this philosophy I can no longer be tolerant, because anything I "permit" I will be accused of promoting.
This seems like a recipe for a horrible, horrible world.
There's certainly a lot of irony in this article lambasting Facebook for spreading outrage culture while itself being written in such a vitriolic tone.
This is an excellent point, and I'm a little embarrassed that I didn't catch it myself.
Publishing used to be hugely profitable because you needed a printing press or a big radio tower, which cost a million dollars, which made it rare and therefore valuable. If you wanted to advertise there weren't that many publishers to choose from in a given market, so they got to charge high rates.
Then the internet smashed the economics of that to smithereens. The New York Times now has to compete with your blog. And your blog doesn't have that much traffic, but the sum total of all blogs and social media actually does, so the number of ad dollars going to traditional publishers has gone way down.
People always point to Google because they sell advertising and get their cut when they do. But the thing Google replaced isn't publishers, it's ad sales reps, and nobody weeps for them. The "loss" isn't the commission going to Google instead of some ad rep, it's the tiny amounts each going to millions of bloggers which used to get summed up and pay the salaries of thousands of reporters.
Which is why the publishers are willing to resort to underhanded dirty tricks as they did to get the EU copyright directive -- because their goal is to shut down all of this user generated content which competes with them for lucrative eyeballs.
You could have stopped at the comma. How many people out there regularly watch Fox News and MSNBC? Or read The Rolling Stone and the WSJ? We've had partisan bubbles for as long as we've had any sort of mass media.
As background, I've spent years writing on social media (and used to work in it):
My Media literacy guide:
A piece I did for Quartz on Selective Facts:
And my data analysis of the NY Times (with an example in social media):
This I disagree with. It's bad, but nowhere near as bad as it was 50 or 100 years ago.
It used to be that most cities had individual newspapers aligned with each party, and sometimes with factions within that party. Those newspapers had to moderate themselves when economic pressures forced mergers, which is why so many newspapers have hyphenated names.
For example, at one time the Los Angeles Times was so staunchly pro-Republican that it didn't even report the results of Democratic primaries or candidates.
Today we're quite homogeneous by comparison. People on both sides of the aisle believe more-or-less the same things, but we zoom in on the tiny differences and exaggerate them. The media does its best to facilitate this, because its business model depends on eyeballs, and eyeballs are driven by novelty and controversy.
I definitely think it's true that mass media has become more partisan. But, as you note, it's not coming from social media. It's coming from the cost of delivering content being driven to practically zero while simultaneously being able to reach nearly everyone in the world.
Facebook, Twitter, and Reddit do.
That's the fundamental difference. You can get a biased, partisan opinion from whichever tabloid you read... but that tabloid doesn't make it trivial to discover that there are two dozen like-minded lunatics you can get together with to get validation for your crazy ideas.
Reading, or having a subscription to the Wall Street Journal doesn't give me membership in the political club it's targeted at, or a hotline to a dozen industrialists. Reading something shared by the Flat Earth Society of Truth on Facebook does, because the link to their Facebook group is right there, and they are looking for members.
Also, the choices sites make based on your profile sometimes puzzle or hurt people, because they can seem pointedly personal. For instance, if someone gets ads or suggested content for acne control or weight loss, it could be seen as hurtful or offensive.
What I think sucks the most is that if you silence this nonsense, the result isn't that they quietly retreat. Instead, they weaponize the silencing as censorship and use it to boost their signal; people see and hear the controversy, and instead of fading away like it should, the controversy gets boosted on social media and gives them some appearance of validity.
same playbook fascists use
In other contexts, it's understood that security by obscurity is bad.
> Fox News exacerbates poisonous politics by creating filter bubbles of like-minded partisans, spreading hoaxes and inaccuracies, inducing anxiety and paranoia, rewarding clickbait and outrage, and so on
> CNN exacerbates poisonous politics by creating filter bubbles of like-minded partisans, spreading inaccuracies, inducing anxiety and paranoia, rewarding clickbait and outrage, and so on
(removed hoaxes since I couldn't find much in that line, but feel welcome to correct me )
> Big media companies with political interests exacerbate poisonous politics by creating filter bubbles of like-minded partisans, spreading hoaxes and inaccuracies, inducing anxiety and paranoia, rewarding clickbait and outrage, and so on
I also don't know of any other media property that has Facebook's "circulation"; Facebook has around 2.3 billion "subscribers".
Nor is FB, in the context of this discussion. Those 2.3 billion people are not 'subscribers', they are content creators and distributors.
It's not like Zuck and his staff personally come up with nefarious stories to put on FB and indoctrinate the masses.
One way to read the news industry's issues with Facebook is that, actually, yes they did, and they're trying to protect the turf that Facebook is encroaching on.
The media has an incredible power to set the agenda, in both a positive (choosing to cover) and negative (eliminating a story by simply ignoring it to death) sense, and it uses it to an extent that can only be called "routine".
> The company is the largest television station operator in the United States by number of stations, and largest by total coverage; owning or operating a total of 193 stations across the country in over 100 markets (covering 40% of American households)...
> Sinclair's stations have been known for featuring news content and programming that promote conservative political positions, and have been involved in various controversies surrounding politically-motivated programming decisions
I think we, as users, bear a lot of responsibility for the content we create and if there's a lot of awful stuff out there... well, we created it. But I think someone deciding to make a bunch of money in concentrating and distributing that awfulness deserves a lot of scrutiny.
I don't think there's evidence of that right now with Facebook.
At what point do we accept that polarization (in a tribal sense) is a feature, not a bug, of human psychology? Maybe FB is just giving us what we want.
Eh, it's closer to that than you might think if you haven't seen the chart of who owns what, and how few "owners" there actually are. It's not technically a monopoly, but a reasonable case can be made that it's an oligopoly.
But no, it's not literally a single entity.
(I don't necessarily endorse this chart fully, you can search yourself on "who owns media companies", but the info is sometimes out of date as they buy and sell things. But it at least generally looks like https://www.webfx.com/data/the-6-companies-that-own-almost-a... )
In fact, it hasn't been for a while.
Authentic user identification ceased to matter, when two things happened. 1. facebook opened its doors to non-collegiate email address account creation (a throttle that actually regulated real names and authentic identity). 2. major news outlets started pushing facebook as part of "The Blogosphere" which offered a fundamental signal to TV land, that viewers could find the next AOL on facebook.
After that, user head count became smoke and mirrors, and a lie. Sure, you can gain an understanding of how many people know who Mario and Luigi are by how many Nintendo consoles are sold, but that's not what matters when you're making movies, holding conventions, and you've got business agreements cemented with global news outlets for perpetual free publicity, for as long as news gets pushed through channels like CNN and Fox News. There's enough brainwashing power in that that people feel safe to dump casual banter and get sloppy with conversations inside Facebook, revealing locations and insight that give a glimpse of a cut of social activity for the safe-for-work crowd.
And that's what Facebook is up to by now. It's just forum software to replace old-school bulletin boards, offering an impromptu mix of craigslist-style performance art and classifieds that maybe grandma might see.
It's not just the fraction of "bad" content in your system, it's also the level of propagation. Most of the "99.9% benign" content will not draw a lot of eyes per piece of content, while I'd guess "bad" content will draw a lot more eyes, and ergo have a bigger impact.
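A back-of-the-envelope sketch of that propagation effect, with all numbers invented for illustration: even if "bad" content is only 0.1% of posts, a large per-post amplification factor lets it claim a substantial share of total impressions.

```python
# Impact depends on impressions, not post counts. Assumed numbers:
total_posts   = 1_000_000
bad_fraction  = 0.001   # 0.1% of posts are "bad"
views_benign  = 100     # average views per benign post
amplification = 500     # bad posts draw 500x the views of benign ones

bad_posts    = total_posts * bad_fraction
benign_views = (total_posts - bad_posts) * views_benign
bad_views    = bad_posts * views_benign * amplification

share = bad_views / (bad_views + benign_views)
print(f"{share:.1%} of all impressions")  # → 33.4% of all impressions
```

So under these (made-up) assumptions, the "0.1% of content" framing understates the exposure by a factor of several hundred.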
I made the opposite argument in the past, actually. I am a proponent of reclassifying Facebook etc. as common carriers/utilities and forcing them not to censor any content unless it's actually illegal. Though that does not and cannot mean they have an obligation to promote such content, e.g. through search results.
However, words do have impact, and may be used as a weapon to incite the readers to certain actions. This can be actions I like, or actions that I find abhorrent.
Then again, the danger that censorship poses to a free society, and the danger of unfettered censorship by private parties controlling major chunks of the "public marketplace of ideas" in particular, outweighs the dangers of bad people publishing legal but reprehensible speech, by a lot. The censorship we allow has to be minimal, and driven by societal consensus or compromise, and right now the state, with a separation of power and checks and balances, while not perfect, is still the far better alternative than letting a Zuckerberg or a Brin or a Dorsey or a Huffman unilaterally decide what's OK and what gets banned.
News channels make the extremely rare seem common, and the common seem like it doesn't exist, all to drive ad revenue. Now traditional news is rapidly becoming irrelevant and we see paywalls popping up left and right. I would argue that the news media is worse at propagating extreme fear and paranoia than Facebook. At least a lot of Facebook content is funny/viral videos, whereas the news is 90+% negative.
There's a large number of "media" folks who at worst title an article in a way that catches eyeballs, compared to those actually pushing straight bullshit.
Anyone selling BS to people deserves scorn, that includes Facebook.
Consider that algorithmic trading in stock markets is insanely noisy, and each trade comes with a fee. Many trades are executed as pricing probes, or efforts to exploit a minute shift in a given price. Assume Facebook traffic has similar motives: to suck impressions into a viral current for any particular campaign of appeal, or at least to convince clientele that advertising with a PR firm actually works. Gaining available eyeballs results in fractions of a penny in certain "users'" pockets.
That's speculation, though. There's not enough information publicly available to know for sure.
99.5% of Boeing 737 MAX airplanes never crashed. That doesn't mean that 0.5% crashing isn't a systemic problem that is newsworthy and demands action.
That is exactly right. I have no idea what "platform" means, but I know what "website" means. (Or "endpoint".)
Facebook was not always in the "news" business. The original website in fact had no "news" feed, and initially users complained when they added this "feature".
"News" of the kind produced by journalists arrived as part of this "feature" much, much later.
Facebook and the news media both sell ad space, but no one stores their personal photos and correspondence with the news media. If Facebook is, as Zuckerberg would claim, not a news organization, then Facebook cannot claim to have made any positive improvement to the quality of journalism. It stands to reason that, aside from "convenience", there is no benefit to getting news via Facebook's website versus the website of the news organization that produces it. To the extent Facebook manipulates the news that people receive via the Facebook website, there could be negative effects.
AFAIK, the news media do not create online versions of their newspapers that (outside the control of the reader) automatically change what stories are presented based on the reader's identity. Though with the influence of Facebook's methods, it would not surprise me to hear that this was happening.
Whatever the case, certainly there must be a journalistic ethical principle that would point away from algorithmically manipulating which news reports are delivered to each reader, even if newspapers know the identity of each reader and what each reader has read in the past. "News", as in "newspapers", is the antithesis of controlled communications where not everyone receives the same information, e.g., within a corporation.
Perhaps an analogy would be a library that knows the identity of its patrons and the items they have checked out in the past. It might be able to make predictions and manipulate which items each patron was exposed to when they visited the library, but it would seem against the fundamental purpose of a library to do such a thing.
The question to ask is what place does "news" (as in "newspaper" not updates on others' Facebook pages) have amongst people's personal photos and correspondence, the "99.9% benign user generated content"?
Why is it there?
Here is some background reading.
Facebook begins announcing major changes, aimed to ensure that time on the platform will be "time well spent."
In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn't Facebook, it was Twitter. The latter's 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook's. "Twitter was this massive, massive threat," says a former Facebook executive heavily involved in the decisionmaking at the time.
So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: he copied, then crushed. He adjusted Facebook's News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines.
That's your slanted viewpoint based on what you've been hearing in the news media. Why is he a "very bad" person? Why are most of the things that he makes "very bad"? There's been a spate of bad press about Facebook recently, so that's primarily what's been coloring your view.
For people in other bubbles, they might believe that "Democrats are very bad people making very bad policy most of the time", or "Muslims are very bad people doing very bad things most of the time". Gross simplifications are easy to adopt, hard to abandon.
Thanks for reading my mind, knowing me better than I could ever know myself.
>Why is he a "very bad" person?
I based my opinion mostly on the stuff I saw and keep seeing from him directly.
>Why are most of the things that he makes "very bad"?
Well, I came to the conclusion (and not just me) that his stuff is designed to be as addictive as possible and as invasive as possible. What he creates, deliberately, is the modern equivalent of cigarettes. Bad attributes in my book. His products are, in my humble opinion, a major contributing factor in the polarization of society and the decrease in civility and political culture all around the world (and no, I am not referring to Russians abusing facebook, I mean facebook itself, which attempts to increase addictiveness by highlighting extremes and conflict as a matter of "entertainment").
On top of that, he and his companies use their power to lobby and push for stuff they know full well is detrimental to society but helpful to Facebook, whether it is regulation designed to strengthen Facebook through regulatory capture, or bringing "zero-rated" Facebook to developing countries in an effort to stomp out local competition before it even exists. Noblesse oblige.
And then there is the constant torrent of news about Facebook's other indiscretions: e.g. failing to keep the data they siphoned off secure, misusing data gathered under false pretenses like 2FA phone numbers, or how they treat the poor souls they hired for pennies in Manila to do their content moderation, etc.
>There's been a spate of bad press about Facebook recently, so that's primarily what's been coloring your view.
I don't give much attention to the attention grabbing op-eds and pundits and sensationalized reporting. E.g. I think the entire Russia stuff when it comes to facebook is overblown, by a lot. But again, thanks for reading my mind.
By and large, I find CNN reporting despicable, MSNBC reporting even more so, but the uncrowned king of despise still remains Fox News, when it comes to TV networks.
>Democrats are very bad people making very bad policy most of the time
Since you bring it up, I believe that Democrats are mostly good people making bad policy most of the time. And that Republicans are, well... kinda split, with some real bad (and loud!) people and some good people with good intentions, and that Republicans make bad policy most of the time.
E.g. https://www.businessinsider.com/embarrassing-and-damaging-zu... or e.g. the bullshit stuff he said in front of Congress.
Is there a term for when journalists forgo making an argument about why something is bad in favor of just making a bunch of assertions plus some negative sentiment words? Why is deciding what speech/UGC should be allowed on social media a 'corporate responsibility', such that letting the government or some 'impartial' NGO arbitrate this is a bad thing? Why would you want every platform to have its own content policy? Why do you want corporations to 'exercise judgement' and to be accountable (to the state? to journalists?) if you also don't want there to be some formal guidelines about what is or isn't allowed on social media?
This author is stating that Facebook should make its own decisions about harmful content, and that Facebook should ALSO be accountable to some other entity for those decisions (presumably the government, but who fucking knows based on the content of the article...), while happily saying that creating any sort of governmental guidance on that issue is "a legal and ethical minefield".
That's bullshit. You can't have your cake and eat it too. Either create guidelines that steer company decisions here and apply them, or accept that companies will make decisions that do not match the author's idea of morality.
You cannot judge a company without setting standards. You cannot let a company judge itself and expect them to make the decisions you think are correct.
So I think maybe what they're saying here is that Facebook needs to exercise judgement and be accountable to the market.
I don't think I really agree with them, but it's a coherent, internally consistent point, and on-brand for Bloomberg.
- There was a violent attack in Mexico against a journalist by some drug cartel, and a random body part (a leg or an arm or a head or whatever) was chopped off and posted on social media by the cartel. The people protesting cartel violence picked this image up and were using it as part of their protest. Facebook's censors allowed it until the image popped up in some school kid's feed in America/England, and after all sorts of outcry it was removed from the site as a violent image.
- The boston marathon bombing happened and gruesome images of people lying on the floor with limbs strewn about started getting shared on the site. Facebook's policy at this point specifically said no violent images on the site so they got blocked by the censors. Media picked this up and raised an outcry on how facebook was censoring these images and basically someone high up said 'leave it up or else' and the images were allowed on the site.
This is clearly hypocrisy, and there's no right answer here. Traumatizing school children with violent imagery they didn't even want to see is not great, while the Boston Marathon bombing was a significant political event in America and those images didn't deserve to be silenced.
This ends up being a question of free speech and what sorts of content Americans are okay with. There are no right answers, and I believe the govt. definitely should step in and provide guidelines here.
The disingenuous thing is using this as an example of how Facebook is okay with govt. regulation while resisting any regulation around things that can hurt them, like any number of their privacy mishaps, shady third-party data markets, and ad tech in general.
I think I recall one of them pointing out that a lot of challenges came from Facebook trying to be everything for everybody.
Moderating content so people don't get pictures of Boston Marathon gore next to their grandkids' photos is a problem they made for themselves.
There is a lot of hypocrisy going on, but this doesn't mean there is no right answer or that the right answer is to invite government fucking censors to control everything.
There are pretty obvious things platforms can do:
1. Instead of opaque algorithms, give people control over what they see.
2. Either do bare legal minimum of moderation or create clear, exhaustive, stable and unambiguous rules for which content is and is not allowed on the platform. Once the rules are set, take a stand and abide by them in all cases.
Because there is far more freedom to reject a corporation than the government (the NGO would depend upon how it interacts with the government). NGOs and governments are susceptible to bias, and while they may be less susceptible than a private corporation, they have far stronger enforcement of what ever bias they do have (once again assuming the NGOs are somehow working with government, if not then this only applies to government).
Take for example someone determining what is allowed to be printed in a book. Let's go back two or three decades, to a timeframe when most of us were alive even if not at all aware of politics, and when social norms and even laws enforced views that we would find objectionable today.
Now, consider two possibilities. One, a corporation decides what isn't allowed and thus chooses to ban any pro-LGBT content. Two, the government decides what isn't allowed and thus chooses to ban any pro-LGBT content. Which is better?
In my mind, the corporate ban is far better, because it is much easier for other, smaller corporations to be started that will publish pro-LGBT content. As social norms change, corporate enforcement of morals gives rise to more freedom.
Perhaps an even better example that is still relevant today instead of a few decades ago: according to the federal government marijuana has no medical benefits.
I'd rather lawmakers not be making any more laws that further restrict my freedoms.
Government arbitration of speech is exactly what the First Amendment bans. And I think there's a reason why it's the First: if the government controls speech, then it controls the election of the next government, and so on, so it pretty much ends democracy, or at least severely limits it.
NGOs can already arbitrate anything they want - as long as it is voluntary to listen to them. There are many NGOs that already issue their opinions about speech. As long as it is not mandatory - it's fine and already exists. As soon as it starts to be mandatory - see above.
> Why do you want corporations to 'exercise judgement' and to be accountable (to the state? to journalists?) if you also don't want there to be some formal guidelines about what is or isn't allowed on social media?
Corporations already exercise judgement and always have. But so far there has been freedom for every corporation to exercise whatever judgement it has, either good or astonishingly bad (as in the case of Facebook), and for users to choose whether they want to deal with this corporation or another one. Introducing the government into the equation kills that.
> there to be some formal guidelines about what is or isn't allowed on social media?
In the US, one of the basics of limitations put by the people on the government is that government, outside of the very narrow task of preventing imminent crime, does not get to say what is or isn't allowed to be said.
> Start with harmful content, which Zuckerberg defines as “terrorist propaganda, hate speech and more.” Facebook would prefer that someone else decide what constitutes such content and should thus be taken down.
I agree with this. Having a private corporation that's invested in engagement, and which is also headquartered in the US (very far geographically and culturally from some of the places it moderates), be in charge of defining what's right and what's wrong is and has been a recipe for disaster, let alone the implications it could have for the sovereignty of nations.
> But it’s hard to see how this would benefit anyone but Facebook.
Excuse me, what?
> Inviting the government to arbitrate what qualifies as “harmful” speech is a legal and ethical minefield, while establishing a third-party system to do the same would amount to offloading corporate responsibility.
Isn't this the government's job though? To define what is a crime/wrong and what isn't, and to deal with the moral aspect of legal codes? Isn't this why we have laws? I don't think anyone is vouching for "big brother" type government control here, but governments always have a principal role in civil rights, inclusion and the moral development of a nation. It's part of their job, and the reason people care to have leaders with strong moral compasses as public servants. Has this view of government been lost in the US?
> There’s no reason to expect every platform to adhere to the same content policies, but every reason to want them to exercise judgment and accept accountability.
I'm sure there are things that should clearly be blocked or allowed. But the gray areas are what's being discussed here, and honestly I don't think that should be Facebook's job (or any private, American, for-profit corporation's, for that matter).
This is just flat out incorrect.
The Bill of Rights was the people saying "You want a US government ruling over the several States? Here are the rules for what you're allowed to do." The US gov't never made a decision on speech, they agreed to the terms of "We, The People", full stop. You're acting like it's the opposite.
This is also why the gov't can't just rewrite the 1st amendment willy nilly—all of the US government's legitimacy with the American people comes from respecting the Bill of Rights. Stop respecting those rights and a rebellion is right around the corner.
Historically, the States had to approve the Bill of Rights (and the rest of the new Constitution) in order for it to come into effect. It's definitely NOT the US gov't "deciding", and it never was. All of the power resided in the States.
Interestingly, the "Federalists" that (re-)wrote the Constitution were mostly opposed to the Bill of Rights—that's the contribution made by the anti-Federalists to the US Gov't as it exists today.
Isn't it funny how most citizens treasure the Bill of Rights more than anything else in the Constitution? That makes it pretty clear which "side" had the well-being of the people in mind.
Specifically, to prove slander, the defamed party has to demonstrate intent and damages. Same with fraud. Both of these are cases of "your right to swing your fist ends at the tip of my nose."
The US government isn't going to decide what is "harmful" at least not for long as a legal challenge would quickly do away with any such rules.
In the end the platforms have to decide for themselves and police themselves, unless they want to be 4chan or whatever the next "post anything" kind of place is nowadays. Right or wrong, they're the only folks who can do it.
What? If Facebook decides it, it's a threat to the sovereignty of countries other than the US, but if the USG decides it, it isn't?
Lol, what does this even mean? You want a Theocracy with Zuck writing the scripture? Are your views tainted in any way by Facebook Incorporated?
The government and their role in defining "harmful" has been defined already. It's called the First Amendment. Might want to check it out.
People have been crapping on Facebook for making judgement calls on what's allowed on its platform. When Facebook does so without government backing, people get up-in-arms saying that it should be the government's role to decide free speech. But when the company goes and requests that government start doing that, people get up-in-arms about how it'll be "regulatory capture".
I can't predict what the outcome of all of this will be. I can however, predict that there will be an angsty article (Bloomberg? New York Times? Wall Street Journal?) decrying how Facebook is awful, evil, and incompetent.
I was giving it the benefit of the doubt until it started talking about how bad GDPR is. GDPR has some short-comings, but it's a bit less black-and-white than what this article makes it out to be.
Maybe what Mark Zuckerberg is saying really is self-serving, but this article doesn't make any good arguments for it. It's just making assertions that it doesn't back up. I don't know if it's just me getting old, but I've started noticing this kind of intellectual dishonesty more and more recently in the media, and even here on HN.
What competitors is the author even talking about here? There are, as far as I'm aware, essentially no competitors left to Facebook anymore.
That being the case, I'm not sure that statement is correct at all. On the contrary, it seems like if FB were required to offer an easily exportable data format, that other services would pop up overnight to try and lure people away from FB onto their (hopefully) more privacy conscious platforms. It would also lower the bar for people to make the switch to something else, as they know their friends can switch just as easily without losing their data.
I agree with the rest of the points of the article, but I don't see how data portability can be anything but a net positive for the consumer.
Ideally, any posts that your Facebook account could see would be fetchable by your non-FB account, and anything you post on your non-FB account would be re-posted on FB for you.
This way, as far as your friends are concerned, you are a normal Facebook user, but you would never actually browse the site (or use the app). That would allow competition in how the UI works, but also less ad revenue for Facebook (and more engineering work, which they would presumably claim is unfair).
The problem with this is that it presupposes a certain data format, which doesn't allow for innovation. How exactly would you export Facebook's data to Twitter, or vice versa? For a start, Facebook doesn't have a character limit; Twitter does. Twitter threads can be nested infinitely; Facebook's can't. Etc.
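To get a rough sense of why cross-platform export is lossy rather than impossible: you can define a neutral schema and accept degradation at the edges. A toy sketch in Python (the `PortablePost` schema and the splitting rule are entirely made up for illustration, not any real platform's format or API):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical neutral export format; no real platform exposes this.
@dataclass
class PortablePost:
    author: str
    text: str
    replies: List["PortablePost"] = field(default_factory=list)

def to_twitter_thread(post: PortablePost, limit: int = 280) -> List[str]:
    """Lossy mapping: split an unlimited-length post into a chain of
    short messages, numbered so readers can reassemble the thread."""
    chunks, current = [], ""
    for word in post.text.split():
        # Start a new chunk when adding this word would overflow,
        # reserving ~8 chars for the "(n/m) " prefix added below.
        if current and len(current) + len(word) + 1 > limit - 8:
            chunks.append(current.strip())
            current = ""
        current += word + " "
    if current.strip():
        chunks.append(current.strip())
    total = len(chunks)
    return [f"({i + 1}/{total}) {chunk}" for i, chunk in enumerate(chunks)]

post = PortablePost("alice", "word " * 200)  # a long, limit-free post
thread = to_twitter_thread(post)
```

The character limit survives the mapping; unlimited nesting wouldn't, which is exactly the parent's point: any mandated format freezes a lowest common denominator.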
Can anyone on the inside give some insight into current morale? Is the constant deluge of negative press cutting through the indoctrination or nah?
I think that feeling is the group narrative talking. It's a thing we (HN, reddit, etc) choose to do together, rather than a specific fact that's true or false.
Former Facebook employee here: One thing that you don't appreciate from the outside, is that FB employees hear more FB criticism than you do. (This is just true of anyone working for any company.) And one of the side effects of hearing more, is that you hear a lot of very wrong, very poorly thought out stuff. So it does get easier to sit out from a group narrative like this, because the parts that are wrong or misleading jump out at you.
Morale is high because paychecks are high. How much you get paid is the key indicator of how much you are worth as a human being. That’s the game we play.
I'm curious what your experience is with this? I've worked at Google and AWS and I don't think I've met a single person who thought they were "changing the world".
The closest it ever got to that was when I once remarked to my lead that I wish I felt like the work I do mattered to which he replied something along the lines of, "We play but a small part in the grand movement that is the progress of technology, but don't forget we do play a part."
Pretty much everyone I know just does their job because they want to retire or support their families. This is also the same for almost every employee I've seen in other sized companies.
> How much you get paid is the key indicator of how much you are worth as a human being. That’s the game we play.
Some people may think this way but it's a choice to surround yourself with those kinds of people.
You can look at an ape in the jungle as getting the most food. Is that the worth of that ape? To the apes, maybe so. To us, it's just an ape.
In your view, are there ways of "extracting value" that are not "some kind of exploitation". Is "changing the world for the better" incompatible with "extracting value" somehow?
> It's almost comic villainy - like, everything they say, do, propose, etc. is always evil in some way.
The problem I have with it is that you can replace the meaning of "they", and have this comment "work" for any number of biased echo chambers. Go ahead and replace "they" with Facebook, Google, Obama, Trump, Exxon-Mobil, scientists, billionaires, politicians, Internet trolls, the media, ad nauseam.
If this sentence can work for any of these scenarios, verbatim, then what's the point of uttering it?
To bring this back to the issue at hand: is there anything Zuckerberg could say that would make you not respond this way? And if you can't come up with anything, are you sure you aren't completely biased against Facebook such that nothing they can do will ameliorate you?
That said, let me try to answer why you're seeing this pattern.
Facebook has a business model that works for certain strategic choices, and doesn't work for others. A big one is they don't charge a subscription fee to users. That puts hard constraints on how they generate revenue.
> is there anything Zuckerberg could say that would make you not respond this way?
That's why it's nigh impossible that Zuckerberg will come along and announce, "we've ended our policy of treating users as the product."
Personally, this was one of the reasons I've been happy to work at firms that sell specific products and services. It doesn't guarantee they'll behave well, but the fact that customers can take their money elsewhere does tend to keep them grounded in the long run.
I think most social media, for that reason and others, has a bad business model. I'd like those businesses to fail to clear out space for alternatives to be built. (And they're not villains working there, so I'd also hope they all land on their feet and find new jobs.)
There's practically zero chance that will happen - either that the existing companies will "clear out" or that the same role would be filled by anything with a different business model. In the real world where Facebook will continue to exist, is there anything that the company or its employees could do that would meet with your approval?
Are you going to justify that claim? It's been made repeatedly about prior companies that were too big to fail, especially tech giants, and they all fall or become obsolete.
> In the real world where Facebook will continue to exist, is there anything that the company or its employees could do that would meet with your approval?
I gave an example: "we've ended our policy of treating users as the product." You're repeating this rhetorical question without addressing any of the points I made or adding anything new.
I was in this industry through all of that. It takes a long time for a company to fall that far. Yes, given enough time anything can happen, but I'm not interested in tautologies. In any time span relevant to this conversation, it's not going to happen. If you want to claim otherwise, that's your burden not mine.
> "we've ended our policy of treating users as the product."
OK, so I guess there is that one unrealistic possibility. Even if Facebook moved to a subscription model, "treating users as the product" is not a phrase they'd use for what came before. So you still haven't indicated that you'd be satisfied by anything that could actually happen.
> You're repeating this rhetorical question
It's not a rhetorical question if there are multiple valid answers, and repeating a question is still better than repeating statements based on counterfactual assumptions. If you want to play "high school debate champion" you'll have to try harder.
If they made that promise to investors, they could go to prison.
> OK, so I guess there is that one unrealistic possibility.
You're demanding that everyone else has to show that they're ready to abandon their principles, lest you judge them intransigent and unreasonable. What changes have you made to your worldview without seeing something significant change?
Probably never: your views are shaped by observations of concrete realities and reflections on the consequences of those. And unless those realities change or you get new information, you couldn't come up with a coherent set of new views if you wanted to.
> It's not a rhetorical question if there are multiple valid answers
It's still a question intended for rhetorical effect. I wasn't saying that makes it invalid, it's just annoying if you're repeating it while dismissing the answer given.
It’s like being a henchman, only you’ve managed to convince yourself it’s honest work. And maybe now it actually is.
Take some responsibility. Are you evil for signing on? No, but let's not pretend they're welcoming you because you're going to shine a light on unethical practices. Are you evil for staying? That's a judgment call you have to make in your own heart-of-hearts, measuring what you've enabled and contributed, who you've enriched--against the moral influence you've exerted (not to mention the questionable influence you've empowered.)
> I and literally thousands of others are here, solving technical problems, trying to influence the non-technical ones, and if you think we shouldn't be than that's just counterproductive by any standard.
There is a standard. I offer you a baseline: if you're solving technical problems and you're systematically failing to influence the non-technical ones, you shouldn't be there. I give you the benefit of the doubt. Your failure is likely measurably less than systematic.
But enough so? Bear in mind the alternative is not yelling at Facebook from the outside, it's building a better world, a better product, a better community...elsewhere.
And how many of the critics are doing that? How many are making a positive moral difference at their own companies, however good or bad those companies already are? I posit that moving forward at a "bad" company is more valuable than standing still at a "good" one. In my group at Red Hat, generally regarded as a "good" company, our project plans were influenced by demands from some pretty shady customers - big banks, government agencies, even a Russian propaganda agency. At Facebook, my work supports data scientists who are detecting and eliminating that same kind of propaganda. But I'm supposed to feel worse about myself now?
You talk trash about freshman ethics, but that's exactly the kind you yourself are engaging in. I was in fact a philosophy major long ago (though more logic and metaphysics than ethics) so I should know. ;) It has been over thirty years since I found such simplistic arguments even slightly amusing.
I am going to butcher this; I can't articulate it the way I think it. Why do we need regulation for a social network that is only as strong as its network? Is the network effect so strong that users can't, or won't, leave of their own accord? I left FB and Instagram a few years ago. It is just so funny to me that this entity (a people aggregator) is powerful, yet all we need to do is leave it. How many people in your network have to leave before it loses its value? Networks expand exponentially. Is the opposite true?
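One way to put a number on that intuition is Metcalfe's law - the assumption (and it is only an assumption, not an empirical fact about Facebook) that a network's value scales with the number of possible pairwise connections. Under that model the decline really is faster than linear:

```python
# Metcalfe's law as a toy model: value ~ number of possible
# pairwise links among n users. Purely illustrative numbers.
def metcalfe_value(n: int) -> int:
    return n * (n - 1) // 2  # n-choose-2 possible connections

full = metcalfe_value(1000)        # network at full strength
after_exit = metcalfe_value(900)   # after 10% of users leave
loss = 1 - after_exit / full       # fraction of "value" destroyed
```

With these made-up numbers, a 10% exodus erases roughly 19% of the modeled value, so under this assumption the answer to "is the opposite true?" is yes: shrinkage hurts superlinearly, which is part of why network effects are hard to escape.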
* no ban of any user whatsoever
* obligation to provide public, unlimited, read-only access to a standardized API for all the data
* no private messaging
and let a million better Facebooks emerge. The content policing problem will solve itself.
"Regulatory capture is a form of government failure which occurs when a regulatory agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry or sector it is charged with regulating." 
Even if he doesn't get to dictate the regulation (thus not achieving regulatory capture), it would still accomplish the mission of keeping the business alive. They will continue selling their ads and making money, regulation or no regulation.
If he specifically described what a transparency report would say, 99% of hot takes would be about how it's wrong or whatever. People's brains would turn off and the opportunity for a conversation would be squandered.
There's an opportunity here for more sincere activists to promote meaningful transparency, because now he's put it on the radar of otherwise low-tech-literacy audiences.
For me, true transparency is being able to look at code. If your code is interpretable, and it looks okay, it is okay.
When it's difficult to interpret, like with a collaborative filtering algorithm used to select newsfeed items, an independent group ought to be able to query the "baked" system or measure user behavior and come to its own conclusions. I wouldn't call this a "transparency report," it's more like an "independent analysis."
It would take a while to determine what questions to ask. But to actually answer them would take an afternoon: a database connection, really. After all, they have incredibly comprehensive user behavioral data, so you can pretty much answer anything you can think of.
There's no cost burden or competitive disadvantage. Their entire tech infrastructure is built around answering user behavior questions easily. And most of the value is tied up in the data, not the models.
The problems they're experiencing are related to their particular implementation of collaborative filtering. If outside groups could see that code (at least), it would be easier to say, run a simulation, and demonstrate what kind of user behavior (or enemy action as it were) makes 10% of people miss 99% of "benign user content" and see 1% of enemy action content.
Maybe you don't believe it's implementation related, and you think it's something innate to social media. Well, what if the newsfeed just showed content randomly? Clearly that would bury enemy action content, just because it's so relatively infrequent. It's important not to be too dogmatic with this sort of stuff, because it will obscure your ability to reach across the aisle to stakeholders like Facebook and get them to do stuff in a sincere way.
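That thought experiment is easy to make concrete. A toy simulation (the 0.1% hostile rate is an assumed figure, echoing the 99.9%-benign number discussed upthread) shows that a ranking-free feed surfaces rare content only at its base rate:

```python
import random

random.seed(42)  # deterministic for reproducibility

# Toy corpus: 0.1% hostile items (assumed rate), the rest benign.
corpus = ["hostile"] * 10 + ["benign"] * 9990

def random_feed(corpus, n_items):
    """A 'newsfeed' that ranks nothing: uniform sampling with replacement."""
    return [random.choice(corpus) for _ in range(n_items)]

feed = random_feed(corpus, 100_000)
hostile_share = feed.count("hostile") / len(feed)
# With no algorithmic amplification, hostile content stays near its
# ~0.001 base rate rather than being boosted by engagement ranking.
```

Obviously nobody wants a purely random feed, but it makes the point that disproportionate exposure of rare content is a property of the ranking implementation, not of social media as such.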
So at its most basic level, sharing the code can't possibly lead to regulatory capture as you describe it. It would be more advanced than the status quo, which is, "Whatever the hell Facebook wants you to hear."
I think this should really give a strong impression. Venezuela went from one of the most prosperous nations in the world to what is likely now the worst place in the world for a poor person to live in, and it only took a few decades. There's no reason the US is immune to the same fate, and it started long before socialism with things like regulatory capture.
"The Venezuelan economy has a profusion of restrictions and impositions on businesses, particularly in the areas of labor and new business formation. Only the big, well-established companies can afford the high regulatory burden. This is consistent with George Stigler’s (1971) insight on regulatory capture, which abets government creation of monopolies stymieing competition. Further, the high levels of informal or underground economic activity in Venezuela are consistent with existence of a high regulatory burden (see Diaz and Corredor 2008). Unsurprisingly for such a setting, corruption is rampant." 
I didn't find that the post you're referring to added anything to the conversation, and I also didn't find it entertaining, which I believe was the poster's goal.
However, I wasn't struck with some kind of "this is a bad thing from other platform" vibe.
One thing I have learned is HN readers like Weekend at Bernie's - where my tongue-in-cheek defense of the film garnered the most comment upvotes I've ever received.
Also on Reddit it's frustrating when I'm writing something informative and it gets hidden between the memes and the jokes, so I tend to stick to jokes there.
I’ll often have a knee-jerk post that’s either funny, caustic, or redundant. I find things go best here (with some ups and downs) when I focus on providing genuine info or insight that others aren’t likely to already possess.
This pressure is a Good Thing imo.
Otherwise I tend to just ask questions to help me understand the article from people who actually know.
But my first reaction to most articles is a dick joke. That belongs on reddit, not here.
Normally an extreme appeal to emotion. In this case excessive use of caps-lock and a fictional narrative that doesn't substantiate itself.
beering’s sibling comment made essentially the same point in direct expository form, and was much shorter, so I disagree with the suggestion that that rambling narrative is any sort of shorthand.
I have been on HN for more than a decade. I tire of the holier-than-thou bullshit where any dialog of context on various subjects is shunned over some taboo about how the premise is delivered.
So I reject your position.
Would you like me to cite things that I know personally about FB, or cite every single compacted reference made in that comment at great length so you can feel superior in your understanding that you may not have?
I mean -- FFS - understand that there are people who want to make a condensed comment on shit that happens in the tech field, but I don't want to write a diatribe with a bibliography on why "because screw them, that's why" is a salient summary.
(See: on HN "google deep-sixes usenet" on FP today)
Maybe for some people. I'm honestly not at all sure what you hoped to communicate with your previous post.
Hell, there was even a poster in a different Facebook story talking about how it was fun to tell headhunters they refused to apply to Facebook because of ethics.
This would hit the privacy-invading companies where it hurts, in their pocketbooks.
That said, there should be a law on the books that disallows the sale of data between companies. I'd wager this is the bulk of where Facebook makes its money. That's what we should be targeting with legislation.
The use of data should be fine. As the end-user or end-company never actually get their hands on the data. They just use a UI with categories and filters and put up ads based on the options available.
Of course, these ads are highly targeted using all sorts of personal data, but they aren't actually selling the data, as that's also just less profitable.
But is there a distinction in advertising between:
1) Allowing users to use the data for making ads.
2) Selling the data to companies such as Apple for other means.
There isn't any visibility. This is what I'm trying to get at.
>> Mobile advertising revenue – Mobile advertising revenue represented approximately 91% of advertising revenue for the first quarter of 2018, up from approximately 85% of advertising revenue in the first quarter of 2017.
In particular, Facebook makes its money from mobile ads
A user won't see much benefit in data profit sharing, but the companies will definitely see the costs. The effort to set up payout channels for every user would drive them to look at other revenue sources.
Plus, if all user data comes with a 50% cut, and their biggest income is based on user data, that's significant to the company.
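Some rough arithmetic backs this up. Assuming a worldwide average revenue per user of about $6 per quarter (an assumed figure, in the ballpark of Facebook's reported ARPU around 2018) and the hypothetical 50% share:

```python
# Back-of-envelope for a hypothetical 50% data-revenue share.
# The ARPU figure is an assumption, not an exact reported number.
arpu_quarterly = 6.00   # assumed worldwide average revenue per user, $/quarter
user_cut = 0.50         # hypothetical share paid back to the user

annual_payout_per_user = arpu_quarterly * 4 * user_cut
```

That comes to about $12 a year per user: negligible to each individual, yet half of ad revenue (plus the payout plumbing) from the company's side, which is the asymmetry the parent describes.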
Well, I would regulate Facebook and Google so that they are required to offer their services to me at production cost plus some decent margin, while guaranteeing that if I pay the price, my data is not monetized or shared in any other way with third parties (including using my data to train any kind of machine learning models).
Also, fb, ig and whatsapp should be split into separate companies. Google web search might as well be separated from everything else.
An interesting example is the yellow vest protest in France. The mass media heavily emphasized the violence of the protesters.
But, on social networks, people mainly shared videos of violence from the police. (ex: https://twitter.com/Albirew/status/1104799813076508673 )
Another interesting piece of news discovered thanks to social networks was that the public TV channel photoshopped a picture of the protest, changing a sign from "Macron get out" to just "Macron".
The contrast between those two sources of information is striking.
Both top-down mass media, and bottom-up social networks bring interesting news, but both are biased and inaccurate.
It was actually both.
I can actually get behind this. As I said before, I'm wary of Fb and Google becoming judge and jury of what is or isn't acceptable speech.
It's against their will now, but they may well grow to like it.
That said, obviously they shouldn't get to craft their own regulation. And any regulation setting the boundaries of an industry risks locking in the way it works (think of how awful dealerships are in the US). If we craft a set of rules for Facebook, we will almost guarantee that it continues to exist.
My (very weak) opinion as someone who hates FB is that we keep the status quo - the constant pressure and hoop jumping and negative media cycles seem to do quite well at driving users away.
Data portability is the first step in allowing other companies to compete with FB and takes away a huge advantage they have (switching costs). Call a spade a spade.
Outside of the author's views on Zuck.co, this is a myopic view of GDPR. In reality, GDPR empowers rather than annoys users, and the regulation actually gives EU citizens some degree of control over their data - which is of increasingly huge benefit, especially in their interactions with Orwellian companies like Facebook.
It is good this article is posted as 'opinion' as it is quite opinionated (doesn't reflect my opinion though).
Yet whenever we hear about Facebook culture, it’s about move fast and break things. Is that really a culture equipped for coping with regulatory compliance? In my experience, the answer is no.
So I’m really left scratching my head on this one.
This sentence implies there are instances (albeit "rare") when it is a good idea to outsource regulation to the enterprise that most needs to be regulated.
Can anyone cite a few examples?
Harmful content => Any government regulation of harmful content will probably hurt small players more, since it's a big fixed cost of moderation. But everyone will realistically pay that cost anyway unless they're something decentralized like Mastodon.
Elections => Nothing to do with size of platform I think, they're just saying that it's disingenuous since filter bubbles are more important. I disagree, election financing is super important and as another poster said filter bubbles are as old as print.
Privacy => It says that GDPR is terrible for small companies, but without any evidence (or a spec of "small"). I think for anyone big enough to be looking to challenge Facebook it's not a big deal, and broadly I like GDPR. "Don't store people's data unless you need to, and ask permission." Not a bad first effort at trying to fix surveillance capitalism.
Making data portable => This definitely, definitely would hurt Facebook more than help it. No one is sitting on Instagram thinking "I'd swap to Facebook if only it was easier". They're the monopolist with the inferior product + network effects; that narrative makes no sense at all.
Take the "sophisticated tools" needed to make data portable, for example: building them into any new social network wouldn't be all that difficult, and it perhaps even adds a little mental structure over the data, since now there are two types of output: the website/app itself, and the portable data export. Sounds good to me.
My own take is that this would be a bad move for Facebook and is either genuine charity or hubris from Zuck, he genuinely might think Facebook is a great platform that can out-compete rather than just a network effect monopoly.
It is good to see this being discussed. I am not making light of that.
The more I see the market shape our Internet, the more I feel some baseline law, to set reasonable, ethical norms, makes great sense.
The elephant in the room is advertising. This is all about ad targeting. Zuckerberg's "rules" totally ignore that. Of course.
The GDPR is a net positive for citizen privacy. This article's attempt to spin it as an anticompetitive bigco-driven monstrosity is factually wrong and ignorant of the GDPR's history.
I have similar (but less strong) doubts about this article's case that automated censorship is obviously better for "everyone except Facebook" if the rules about what to censor were set by tech molochs and not governments. I'm wary of any kind of censorship, but if we have to have it, I prefer the rules to be set in a public parliament and not some boardroom somewhere. "Corporate responsibility" my ass. At this scale, Facebook are public infrastructure and should be regulated as such.
I'll gladly believe the author that Zuckerberg will always and only do things that benefit Facebook. Zuckerberg has shown time and time again that every single public initiative he takes turns out to be in Facebook's direct benefit and often in everybody else's disadvantage. Of course there's a hidden agenda here, there always is.
But the author hasn't found it.
Please wake up and smell the coffee. Regulations are needed but GDPR is absolutely not the answer.
, p: zaZLc&ncedu%X8apb$j9
Have at it.
I appreciate the gesture you made, but you're exposing your friends and families to the random dregs of the Internet by giving away your account like that.
> I appreciate the gesture you made, but you're exposing your friends and families to the random dregs of the Internet by giving away your account like that.
That's kind of the point. FB isn't going to change until it starts to suffer, and a good way to cause it to suffer is to reduce its usefulness.
The best way to do that is to make the feed useless.
*shrug* Whatever. The account's locked now and I don't really care enough to unlock it.
How does making your family deal with extra bullshit from the awful parts of the internet make facebook suffer? I don't understand your logic with this.
b) Facebook will only suffer if the feed becomes (more) useless.
I don't understand how you don't see the logic. Bad feed -> worse FB experience -> decreased usage.
> How does making your family deal with extra bullshit from the awful parts of the internet make facebook suffer?
Assuming this is your real account with your real friends/family (which seems to be what you're saying, since you're convinced this would give real people a bad feed) then you're just exposing people who trust you to trolls.
I could text your friends/family with spam, and since it's coming from someone they know, they might click it. I could ask them for personal details. I could just straight out insult them out of the blue and hurt their feelings, or send awful things their way.
I really hope nobody did and in general would like to think people here wouldn't— but this is a public site and the internet is a big place.
I doubt for your friends this would register as "their feed becoming useless" - it'd mostly just register as you being awful out of the blue, until you explained you were hacked. In any case, creating no content and letting people know you're only reachable through other services is probably a more effective way of making Facebook less useful for the people you know. You could also just post noise yourself if you were committed to doing so.
And OP responded with:
> That's kind of the point.
I don't think OP thinks the point is to expose friends to potential damage, just noise and nonsense unrelated to him. I was explaining that if that's what he wanted, he can "post noise" himself, as opposed to opening his account to the public internet to ensure the safety of his acquaintances.
I’m not arguing that limited liability is perfect - just that we’ve built much of capitalism (including our ability to invest) on top of it.