Zuckerberg’s Rules Would Hurt Everyone but Facebook (bloomberg.com)
592 points by howard941 49 days ago | 250 comments



> Facebook exacerbates poisonous politics by creating filter bubbles of like-minded partisans, spreading hoaxes and inaccuracies, inducing anxiety and paranoia, rewarding clickbait and outrage, and so on

The irony is that the media is doing this now, with respect to anything related to Facebook. Facebook is a website with 99.9% benign user generated content. The news media has been spreading outrage and paranoia and trying to get 'engagement' since before Facebook was born.


That's a little disingenuous. No doubt you are correct that 99.9% of the user-generated content is benign, but that 99.9% of user-generated content would likely not represent 99.9% of viewer traffic.

If toxic content amounting to only 0.01% by volume is receiving disproportionate attention and focus, not only permitted but actively encouraged by complex, opaque algorithms, then it is certainly a much bigger problem than its meagre character count would otherwise suggest.


> If toxic content amounting to only 0.01% by volume ... is receiving disproportionate attention and focus, not only permitted but actively encouraged by complex, opaque algorithms, then it is certainly a much bigger problem

To me, this needs to be split into two use cases. The fact that 0.01% of content gets viewed a lot more than average is not a bad thing. Much of user content is junk, with a heavy dose of ads for seasoning. It is perfectly reasonable that original or thoughtful posts, even if unpleasant or provocative, get much higher traffic. The problem is the algorithmic reinforcement that encourages those making short-term sensationalist statements.

Let's still allow folks to voice their opinions, whatever they are, and advertise them by the digital-age equivalents of "word of mouth". Just keep it human, not algorithm-driven. My 2c.


I'm sorry, but let's not dress this up. The content with high engagement on Facebook is almost exclusively lowest-common-denominator crap. It's not quotes from Kant and Sartre - it's quotes from LadBible and Kim K.

I think we can all agree that quantity isn't the right metric for this - whether that's number of posts or amount of traffic on a given post. What we're really trying to do is gauge what the conversation is.

Having said that, there are clear distinguishable problems we can point to that have an interesting quantifier. Anonymous 'Campaign' groups run by individuals that have previously been associated with Brexit campaigning are currently spending millions on Brexit advertising. There is no indication of where the money comes from to pay for these (we know it's not from the people who are associated with the page because they're just staffers).

In that way I agree with you - the thing to worry about is where the ad dollars are spent, but I don't think that's the only thing we should be worrying about.


Sturgeon's Law states that 90% of everything is crap [0].

The 1% Rule says that in any online community, 1% of participants create content while 99% lurk [1].

I'll propose Panarky's Corollary that 90% of the crap is produced by 0.1% of the participants and then shared, liked and amplified by 90% of participants.

These 0.1% pathological participants are corporations advertising, governments propagandizing, zealots proselytizing, and trolls antagonizing.

Facebook can see these patterns better than anyone, and they could eliminate or de-rank the material from the 0.1% if they wanted to.

But the 0.1% is disproportionately incendiary and therefore profitable, so Facebook has a very strong incentive to look the other way.

[0] https://en.wikipedia.org/wiki/Sturgeon%27s_law

[1] https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture)


I don't think that solves anything. "Human driven" is precisely what forwarded messages on WhatsApp are. FB has felt the need to limit the number of people you can forward something to precisely in order to address the problem you're attempting to solve.

My presumption is that by far the single biggest signal on whether to display a post on the FB feed to more people is how many people like and share. That's a people problem at the core, not an algorithm problem.


> My presumption is that by far the single biggest signal on whether to display a post on the FB feed to more people is how many people like and share. That's a people problem at the core, not an algorithm problem.

It's still an algorithm problem at its core, because the algorithm is what determines what the "single biggest signal" it will use for amplification is. There's no reason why maximum shareability should be the main factor; it could be something else, like some proxy for erudition, for instance.
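
To make that concrete, here's a minimal toy ranker (Python, purely hypothetical; not Facebook's actual code) where the dominant signal is a swappable design choice rather than a given:

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        shares: int            # raw popularity signal
        reading_level: float   # crude, invented stand-in for "erudition"

    def by_engagement(post):
        # What the status quo is said to amount to: amplify whatever is
        # already being shared, compounding sensationalism.
        return post.shares

    def by_erudition(post):
        # An equally mechanical alternative; imperfect, but it shows that
        # "most shared" is a choice made in code, not a law of nature.
        return post.reading_level

    def rank_feed(posts, signal=by_engagement):
        return sorted(posts, key=signal, reverse=True)

    posts = [Post("outrage bait", shares=9000, reading_level=3.0),
             Post("long-form essay", shares=120, reading_level=14.0)]
    print([p.text for p in rank_feed(posts)])                # engagement wins
    print([p.text for p in rank_feed(posts, by_erudition)])  # a different feed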


Even if I accept your claim, the parent was suggesting getting rid of the algorithm and just letting it be human driven. Human driven means most shared is what gets most seen. How is that different than the current algorithm? It doesn’t change anything. That’s my point.

However, this shows that the root problem really is human. You can tell because you are suggesting designing an algorithm to fix a pre-existing problem. Without algorithms of any sort, this problem is there.

I don’t think it can be solved by algorithm.


There’s no proxy for content quality.

Simply: it's all content, and its impact depends on the people who take the information in and then apply their beliefs and understanding to it.

Some content is simple - recipes for example.

Others are impossible to parse: "they want to reduce hate speech" translates as "they want to remove conservative/right wing speech" (this actually happened in a conversation yesterday, suggesting that the term "hate speech" is being transformed/co-opted).

Engagement in contrast, is simple, reliable and measurable.


> There’s no proxy for content quality.

I don't think that's true, it's just that the proxies are imperfect. But the specific alternative metric wasn't that important to my point.

> Engagement in contrast, is simple, reliable and measurable.

That's the root of the problem: the algorithms have been designed to use "simple, reliable, and [easily] measurable" things as their whole input, and then narrowly A/B tested to maximize some other easily measurable output. Looked at holistically, it's too simple to do an acceptable job.

If you took a similar attitude, and applied it to create an algorithm for another discipline, like medicine for instance, it'd be pretty clear it'd be a disaster.
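
As a toy illustration of that failure mode (invented numbers, not real data): an A/B test whose only measured output is click-through will crown whichever variant maximizes it, with no term anywhere for accuracy or long-term harm.

    # Toy A/B test: the "winner" is whichever ranking variant maximizes the
    # one output we measure. Nothing here measures accuracy, civility, or
    # harm, so a variant that boosts outrage wins whenever outrage gets
    # clicked.
    def click_through_rate(clicks, impressions):
        return clicks / impressions if impressions else 0.0

    variants = {
        "A_chronological": {"clicks": 4100, "impressions": 100000},
        "B_outrage_boost": {"clicks": 5700, "impressions": 100000},
    }

    winner = max(variants, key=lambda v: click_through_rate(**variants[v]))
    print(winner)  # "B_outrage_boost" - the proxy metric decides alone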


I agree, that's a very useful and necessary distinction that I neglected to include in my original comment.

Without those spontaneous user-composed opinions and anecdotes, the platform would quickly become (or perhaps already has become) plastic and artificial. As anyone having to relate information through a bureaucratic communications department has certainly experienced, even the most unique, interesting and useful story in the world stands little chance against an eager team of copy editors and PR representatives.


> No doubt you are correct that 99.9%

That's not my starting assumption.

I think Facebook has a problem that its typical so-called "benign" users are creating less and less content, while malignant content-makers are not discouraged.


They acquired Instagram and keep acquiring small startups exactly for this reason. Content generation direct to FB is going down because the platform is not innovating anymore, and for now their only way out of it is through acquisitions.


I'm puzzled why Facebook is getting the blame here. If you are seeing toxic content in your feed, it's because your friends put it there. It's not Facebook's fault that your friends suck.


> If you are seeing toxic content in your feed, it's because your friends put it there.

Not really. Facebook put it there for me. Time and again it shows me items like "[Friend] replied to a comment on [toxic thing]", where the thing gets a prominent preview etc. I haven't seen a real update from [Friend] in my feed for months, yet there are plenty when I visit their profile. Even if the friend is fighting the good fight and has commented to correct some misconceptions, it's clear that this particular action was selected for my feed because it's related to a controversial item.


Yes, but your friends share content that sucks because the Facebook algorithm fills their feeds with click-bait, with the sole purpose of increasing engagement and user retention metrics.


No. The most toxic content is often comments, which are shown to me even when I don't want to see them.

If it weren't for comments from non-friends, Facebook would have no toxic content for me at all.


If you go to the person's profile and click unfollow, then you won't see their comments on posts that are not yours.


I can't do that with billions of users. I'm saying that I'm shown comments by complete strangers on every post that shows up in my news feed. There's no way to turn that off.

Facebook was great until it started inserting posts from people I didn't explicitly whitelist as friends, and I'm including news articles in that category.


You must have your privacy settings set to "World"; change it to Friends, or maybe Friends and Friends of Friends, and you will be better off. Sometimes when I want to say something on a subject, I will go to Facebook, search that subject, and post on world-readable posts with the opposite opinion to my own.


Maybe it's that facebook encourages the content from your friends that is toxic, because that toxic content leads to more clicks, engagement, and ultimately revenue.


Not to mention that it's entirely possible to create very bad effects via benign content. Facebook (and other social media) use has been linked to a significant rise in depression.

https://www.thisisinsider.com/social-media-link-depression-l...

https://www.cbsnews.com/news/rise-in-suicide-and-social-medi...


> If toxic content amounting to only 0.01% by volume is receiving disproportionate attention and focus

You mean like the rumour-mongering, low-effort politics etc. in your standard tabloid?


This is a case of “What you permit, you promote.” People focus on the 0.01% of a platform’s content that’s bad/toxic because that is what defines where the platform draws the line on what’s acceptable.


Isn't "What you permit, you promote" literally intolerance?

The whole point of tolerance is to say "I don't agree with that, but I respect your right to do your own thing."

This is advocating the exact opposite. Under this philosophy I can no longer be tolerant, because anything I "permit" I will be accused of promoting.

This seems like a recipe for a horrible, horrible world.


While I wouldn't call Facebook content benign, I definitely try to keep in mind, when I see articles like these, that news sites like Bloomberg are eager to attack Facebook because Facebook is the biggest and most direct threat to their business model.

There's certainly a lot of irony in this article lambasting Facebook for spreading outrage culture, itself written in such a vitriolic tone.


While I wouldn't call Facebook content benign, I definitely try to keep in mind, when I see articles like these, that news sites like Bloomberg are eager to attack Facebook because Facebook is the biggest and most direct threat to their business model.

This is an excellent point, and I'm a little embarrassed that I didn't catch it myself.


Bloomberg's business is mainly its subscription-based financial analytics platform. How exactly is Facebook attacking that? With the new EU Copyright Directive, they can't even redistribute news content without a license from the publishers. Facebook has little to no content of its own; the vast majority of it is user-generated. Now its obnoxious CEO talks about rules and regulation - mainly for others. Because if Facebook ever wanted to self-regulate its content, it would all be in their TOS, just like their no-nudity policy.


Google is the biggest and most direct threat to the publisher model. Publishers don't earn (good) money off social traffic; it's just not what advertisers pay for, so if you're an ad-driven business you want Google or Pinterest (search) traffic.


The biggest and most direct threat to the publisher model is other publishers.

Publishing used to be hugely profitable because you needed a printing press or a big radio tower, which cost a million dollars, which made it rare and therefore valuable. If you wanted to advertise there weren't that many publishers to choose from in a given market, so they got to charge high rates.

Then the internet smashed the economics of that to smithereens. The New York Times now has to compete with your blog. And your blog doesn't have that much traffic, but the sum total of all blogs and social media actually does, so the number of ad dollars going to traditional publishers has gone way down.

People always point to Google because they sell advertising and get their cut when they do. But the thing Google replaced isn't publishers, it's ad sales reps, and nobody weeps for them. The "loss" isn't the commission going to Google instead of some ad rep, it's the tiny amounts each going to millions of bloggers which used to get summed up and pay the salaries of thousands of reporters.

Which is why the publishers are willing to resort to underhanded dirty tricks as they did to get the EU copyright directive -- because their goal is to shut down all of this user generated content which competes with them for lucrative eyeballs.


>The irony is that the media is doing this now, with respect to anything related to Facebook.

You could have stopped at the comma. How many people out there regularly watch Fox News and MSNBC? Or read The Rolling Stone and the WSJ? We've had partisan bubbles for as long as we've had any sort of mass media.


Yes, but the partisanship is likely getting worse, because we each individually get the exact information that is "relevant" to us and therefore feeds our confirmation bias. It's a different form of tribalism from 1950s mass media partisanship. There's actually research (James Hamilton @ Stanford) that shows when capital costs for printing presses were higher, newspapers became less partisan because you had to appeal to everyone to be profitable. The internet and cable news are helping to unwind that period.

As background, I've spent years writing on social media (and used to work in it):

My Media literacy guide:

http://github.com/nemild/hack-the-media/blob/master/README.m...

A piece I did for Quartz on Selective Facts:

https://qz.com/1130094/todays-biggest-threat-to-democracy-is...

And my data analysis of the NY Times (with an example in social media):

https://www.nemil.com/s/part2-terrorism.html


Yes, but the partisanship is likely getting worse

This I disagree with. It's bad, but nowhere near as bad as it was 50 or 100 years ago.

It used to be that most cities had individual newspapers aligned with each party, and sometimes with factions within that party. Those newspapers had to moderate themselves when economic pressures forced mergers, which is why so many newspapers have hyphenated names.

For example, at one time the Los Angeles Times was so staunchly pro-Republican that it didn't even report the results of Democratic primaries or candidates.


I agree with your general perspective. I'd guess that the further back in time you go, the stronger filter bubbles were. One hundred thousand years ago when we all lived in tribes and clans was arguably peak filter bubble. These relatively small groups had their own histories, languages, religions, traditions, etc., and often lived in isolation from everyone else. The diversity was probably staggering. Impossible to imagine. Just look at how alien and unique a place like Japan can get after a bit of isolation.

Today we're quite homogenous by comparison. People on both sides of the aisle believe more-or-less the same things, but we zoom in on the tiny differences and exaggerate them. The media does its best to facilitate this, because its business model depends on eyeballs, and eyeballs are driven by novelty and controversy.


>Yes, but the partisanship is likely getting worse, because we each individually get the exact information that is "relevant" to us and therefore feeds our confirmation bias. It's a different form of tribalism from 1950s mass media partisanship. There's actually research (James Hamilton @ Stanford) that shows when capital costs for printing presses were higher, newspapers became less partisan because you had to appeal to everyone to be profitable. The internet and cable news are helping to unwind that period.

I definitely think it's true that mass media has become more partisan. But, as you note, it's not coming from social media. It's coming from the cost of delivering content being driven to practically zero while simultaneously being able to reach nearly everyone in the world.


Is it? We've had tabloids here in the UK for years, and it seems the more extreme ones have generally had higher circulation numbers than their more moderate rivals. And the media here have been aimed at partisans for a long time.

https://www.youtube.com/watch?v=DGscoaUWW2M


Social media is slightly different in that content is catered to the individual, so I'd guess that it does a slightly better job at siloing people. It also allows others on the platform to provide the catered content as opposed to the media corporation providing you with their political vision (although I'm not sure which is better or worse).


To get non-political for a moment and talk about, say, flat-Earthers: the media doesn't put finding like-minded delusional individuals one click away.

Facebook, Twitter, and Reddit do.

That's the fundamental difference. You can get a biased, partisan opinion from whichever tabloid you read... But that tabloid doesn't make it trivial to discover that there are two dozen like-minded lunatics you can get together with to get validation for your crazy ideas.

Reading, or having a subscription to the Wall Street Journal doesn't give me membership in the political club it's targeted at, or a hotline to a dozen industrialists. Reading something shared by the Flat Earth Society of Truth on Facebook does, because the link to their Facebook group is right there, and they are looking for members.


Also, social media sites can recommend disturbing content to people based on their history. I think it goes something like: you're following accounts related to depression, suicide and moody black-and-white photos? You might also like… this page glorifying self-harm! The automatically generated, opaquely curated pages like Instagram's 'explore' can present disturbing videos and themes as if they're normal or appealing (e.g. the case of Molly Russell).

Also, the choices sites make based on your profile can puzzle or hurt people, because they seem so personal. For instance, if someone gets ads or suggested content for acne control or weight loss, it could come across as hurtful or offensive.


Facebook and social media in general have done a lot to bring the "dark web" forward to the masses.

What I think sucks the most is that if you silence this nonsense, the result isn't that they quietly retreat; they weaponize the silencing as censorship and use that to boost their signal. People see and hear the controversy, and instead of letting it fade away like it should, social media boosts it and gives them some appearance of validity.

same playbook fascists use


> But that tabloid doesn't make it trivial to discover that there are two dozen like-minded lunatics you can get together with to get validation for your crazy ideas.

In other contexts, it's understood that security by obscurity is bad.


This analogy falls down as soon as you look at misinformation where moneyed interests exist (put another way, as soon as you get political). The media puts delusional individuals front-and-center with an appeal to some bizarre, binary idea of fairness as soon as they report/talk about climate change, etc.


Here are some other takes:

> Fox News exacerbates poisonous politics by creating filter bubbles of like-minded partisans, spreading hoaxes and inaccuracies, inducing anxiety and paranoia, rewarding clickbait and outrage, and so on

([1])

> CNN exacerbates poisonous politics by creating filter bubbles of like-minded partisans, spreading inaccuracies, inducing anxiety and paranoia, rewarding clickbait and outrage, and so on

(removed hoaxes since I couldn't find much in that line, but feel welcome to correct me [2])

> Big media companies with political interests exacerbate poisonous politics by creating filter bubbles of like-minded partisans, spreading hoaxes and inaccuracies, inducing anxiety and paranoia, rewarding clickbait and outrage, and so on

([1], [2])

[1]: https://en.wikipedia.org/wiki/Fox_News_controversies

[2]: https://en.wikipedia.org/wiki/CNN_controversies#Persian_Gulf...


One major difference is "the media" is not a monolithic entity with singular purpose, while Facebook is.

I also don't know of any other media property that has Facebook's "circulation"; Facebook has around 2.3 billion "subscribers".


> "the media" is not a monolithic entity

Nor is FB, in the context of this discussion. Those 2.3 billion people are not 'subscribers', they are content creators and distributors.

It's not like Zuck and his staff personally come up with nefarious stories to put on FB and indoctrinate the masses.


But the algorithms that decide what content to surface come from one place with one intent. Newspapers can't globally decide to emphasize particular stories and have as great an effect.


"Newspapers can't globally decide to emphasize particular stories and have as great an effect."

One way to read the news industry's issues with Facebook is that, actually, yes they did, and they're trying to protect the turf that Facebook is encroaching on.

The media has an incredible power to set the agenda, in both a positive (choosing to cover) and negative (eliminating a story by simply ignoring it to death) sense, and it uses it to an extent that can only be called "routine".


Imagine setting the agenda on a per-reader basis. Newspapers set the agenda for their entire audience. Facebook sets the agenda for each personally identifiable reader, or for specific groups of readers. Facebook tracks what each one reads, when, and so on. Having billions of non-anonymous web users all visit the same website, tracking and recording everything they do, coercing them to supply all manner of personal information, then manipulating what they see based on the collected data: this is a propaganda machine with power every bit as incredible as the media's.


Have you heard of the Sinclair group?

> The company is the largest television station operator in the United States by number of stations, and largest by total coverage; owning or operating a total of 193 stations across the country in over 100 markets (covering 40% of American households)...

> Sinclair's stations have been known for featuring news content and programming that promote conservative political positions, and have been involved in various controversies surrounding politically-motivated programming decisions

https://en.wikipedia.org/wiki/Sinclair_Broadcast_Group#Polit...


I have, but I think Facebook, as a media company, is what Sinclair can only wish it was. It's a media company that has real-time measurements of what people are engaging with, who they are demographically, and can respond minute-by-minute with adjustments of what kind of "coverage" they provide (for Facebook, this is adjustments to the algorithms that make stories show up in users' timelines).

I think we, as users, bear a lot of responsibility for the content we create and if there's a lot of awful stuff out there... well, we created it. But I think someone deciding to make a bunch of money in concentrating and distributing that awfulness deserves a lot of scrutiny.


Not quite. I think for it to be a more apt comparison, Facebook would need to be pushing a narrative to its most popular users to promote on their channels, the same way that the Sinclair group does:

https://www.youtube.com/watch?v=hWLjYJ4BzvI

I don't think there's evidence of that right now with Facebook.


I'd say it's the revealed preferences of FB users that decide which content to surface. Facebook's algorithms are just enabling that.

At what point do we accept that polarization (in a tribal sense) is a feature/bug of human psychology? Maybe FB is just giving us what we want.


Seems like you've never listened to / watched / read things from News Corp. They globally set an agenda. If you are a minority, it is a very hurtful agenda. It has happened for decades.


"One major difference is "the media" is not a monolithic entity with singular purpose, while Facebook is."

Eh, it's closer to that than you might think, if you haven't seen the chart of who owns what and how few "owners" there actually are. It's not technically a monopoly, but a reasonable case can be made that it's an oligopoly.

But no, it's not literally a single entity.

(I don't necessarily endorse this chart fully, you can search yourself on "who owns media companies", but the info is sometimes out of date as they buy and sell things. But it at least generally looks like https://www.webfx.com/data/the-6-companies-that-own-almost-a... )


I'd take issue with that number, since the inertia of authenticated user count, or unique HTTP session count, or even raw network request traffic isn't profoundly relevant at this point.

In fact, it hasn't been for a while.

Authentic user identification ceased to matter when two things happened: 1. Facebook opened its doors to non-collegiate email address account creation (a throttle that had actually regulated real names and authentic identity); 2. major news outlets started pushing Facebook as part of "The Blogosphere," which offered a fundamental signal to TV land that viewers could find the next AOL on Facebook.

After that, user head count became smoke and mirrors, and a lie. Sure, you can gain an understanding of how many people know who Mario and Luigi are by how many Nintendo consoles are sold, but that's not what matters when you're making movies, holding conventions, and you've got business agreements cemented with global news outlets for perpetual free publicity, for as long as news gets pushed through channels like CNN and Fox News. There's enough brainwashing power in that that people feel safe to dump casual banter and get sloppy with conversations inside Facebook, revealing locations and insights that give a glimpse of a cut of social activity for the safe-for-work crowd.

And that's what Facebook is up to by now. It's just forum software to replace old-school bulletin boards, offering an impromptu mix of craigslist-style performance art and classifieds that maybe grandma might see.


Most books are not "The Little Red Book", yet "The Little Red Book" managed to help cause a lot of harm.

It's not just the fraction of "bad" content in your system, it's also the level of propagation. Most of the "99.9% benign" content will not draw a lot of eyes per piece of content, while I'd guess "bad" content will draw a lot more eyes, and ergo have a bigger impact.


I totally agree with what you are saying, but to nitpick/go on a tangent, the direction of information flow may matter too. From what I understand, the Little Red Book was more of a symbol or product of the cult of personality and hysteria, rather than the _cause_ of subsequent hysteria after its publishing. In other words, there was plenty of dogma to go around that it may not have created more chaos, and had it not existed, some other symbol may have taken its place. It would be interesting to see how much content on Facebook, despite high propagation, is mostly destined for echo chambers, or actually spurring new action and converting others.


Let's start banning books with bad content I guess?


I did not make this argument.

I made the opposite argument in the past, actually. I am a proponent of reclassifying Facebook etc. as common carriers/utilities and forcing them not to censor any content unless it's actually illegal content. Though, that does not and cannot mean they have an obligation to promote such content, e.g. through search results.

However, words do have impact, and may be used as a weapon to incite the readers to certain actions. This can be actions I like, or actions that I find abhorrent.

Then again, the danger that censorship poses to a free society, and the danger of unfettered censorship by private parties controlling major chunks of the "public marketplace of ideas" in particular, outweighs the dangers of bad people publishing legal but reprehensible speech, by a lot. The censorship we allow has to be minimal, and driven by societal consensus or compromise, and right now the state, with a separation of power and checks and balances, while not perfect, is still the far better alternative than letting a Zuckerberg or a Brin or a Dorsey or a Huffman unilaterally decide what's OK and what gets banned.


The percentage of content hardly matters in regards to your point about news being in this business long before Facebook. It's ironic that news sources are pointing fingers here since this has been their CORE product for years, minus the troves of data.

News channels make the extremely rare seem common, and the common seem like it doesn't exist. All to drive ad revenue. Now traditional news is rapidly becoming irrelevant and we see paywalls popping up left and right. I would argue that the news media is worse at propagating extreme fear and paranoia than Facebook. At least a lot of Facebook content is funny/viral videos, whereas the news is 90+% negative.


I disagree with your 99% assertion... About 30% of the stuff I've seen over the years on my feed and others' (friends, family, and acquaintances) has been opinionated or political, to the point that dissent generally just causes posters to double down or become entrenched and more opinionated next time. Maybe this kind of stuff you consider benign?


You bring up an important point. At the same time, the media is in a precarious codependent position, needing to whip its readers into an emotional frenzy simply to stay alive. It's as if a sentient cortisol dispenser were plugged into many humans, and its own incentive system measured "how animated are the people receiving my services".


In fairness, I'm not sure any given person qualifies as responsible for being "the media" or all the other stuff.

There's a large number of "media" folks who at worst title an article in a way that catches eyeballs, compared to those actually pushing straight bullshit.

Anyone selling BS to people deserves scorn, that includes Facebook.


I am not sure I believe that facebook content is truly anything like 99.9% user generated at this point.


I'd hazard a guess that, considering there's no real bar to clear for generating Facebook content (an email address, maybe a prepaid phone number, and a working internet connection), it might be that fewer than one in ten accounts is not the work of a business or automation outfit. So, less than 10% genuinely user-generated.

Consider that algorithmic trading in stock markets is insanely noisy, and each trade comes with a fee. Many trades are executed as pricing probes, or as efforts to exploit a minute shift in a given price. Assume Facebook traffic has similar motives: sucking impressions into a viral current for a particular campaign of appeal, or at least convincing clientele that advertising with a PR firm actually works. Gaining available eyeballs results in fractions of a penny in certain "users'" pockets.


Even if most UGC on Facebook is benign, the system that uses it most certainly isn't. The data harvesting and addictive user design is actively harmful even if the content itself isn't.


> Facebook is a website with 99.9% benign user generated content.

That's speculation, though. There's not enough information publicly available to know for sure.


If you've ever used facebook then you'd know that your description "99.9% benign user generated content" is completely wrong.


This is not empirically "wrong". My feed is completely benign, as is my fiancee's, my mom's, my sister's, and my best friend's (these being the people who have left their Facebook open for me to peruse).


We found someone who hasn't logged onto their Facebook since 2011


> Facebook is a website with 99.9% benign user generated content

99.5% of Boeing 737 MAX planes never crashed. That doesn't mean that the 0.5% that did crash isn't a systemic problem that is newsworthy and demands action.


"Facebook is a website..."

That is exactly right. I have no idea what "platform" means, but I know what "website" means. (Or "endpoint".)

Facebook was not always in the "news" business. The original website in fact had no "news" feed and initially users complained when they added this "feature".

The addition of "news" of the kind produced by journalists was added as part of this "feature" much, much later.

Facebook and the news media both sell ad space, but no one stores their personal photos and correspondence with the news media. If Facebook is, as Zuckerberg would claim, not a news organization, then Facebook cannot claim to have made any positive improvement in the quality of journalism. It stands to reason that, aside from "convenience", there is no benefit to getting news via Facebook's website versus the website of the news organization that produces it. To the extent Facebook manipulates the news that people receive via the Facebook website, there could be negative effects.

AFAIK, the news media do not create online versions of their newspapers that (outside the control of the reader) automatically change what stories are presented based on the reader's identity. Though with the influence of Facebook's methods, it would not surprise me to hear that this was happening.

Whatever the case, certainly there must be a journalistic ethical principle that would point away from algorithmically manipulating which news reports are delivered to each reader, even if newspapers know the identity of each reader and what each reader has read in the past. "News", as in "newspapers", is the antithesis of controlled communications where not everyone receives the same information, e.g., within a corporation.

Perhaps an analogy would be a library that knows the identity of its patrons and the items they have checked out in the past. It might be able to make predictions and manipulate which items each patron was exposed to when they visited the library, but it would seem against the fundamental purpose of a library to do such a thing.

The question to ask is what place does "news" (as in "newspaper" not updates on others' Facebook pages) have amongst people's personal photos and correspondence, the "99.9% benign user generated content"?

Why is it there?

Here is some background reading.

https://people.well.com/conf/inkwell.vue/topics/504/Brain-Ha...

https://www.wired.com/story/inside-facebook-mark-zuckerberg-...

"January 2018

Facebook begins announcing major changes, aimed to ensure that time on the platform will be "time well spent."

In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn't Facebook, it was Twitter. The latter's 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook's. "Twitter was this massive, massive threat," says a former Facebook executive heavily involved in the decisionmaking at the time.

So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook's News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines."


[flagged]


This is black and white thinking. No one is entirely bad, no one is entirely good. Very bad people are capable of making very good things. Very good people are capable of making very bad things.


Zuckerberg, though, seems to me like a very bad person making very bad things most of the time.


> very bad person making very bad things most of the time.

That's your slanted viewpoint based on what you've been hearing in the news media. Why is he a "very bad" person? Why are most of the things that he makes "very bad"? There's been a spate of bad press about Facebook recently, so that's primarily what's been coloring your view.

For people in other bubbles, they might believe that "Democrats are very bad people making very bad policy most of the time", or "Muslims are very bad people doing very bad things most of the time". Gross simplifications are easy to adopt, hard to abandon.


>That's your slanted viewpoint based on what you've been hearing in the news media.

Thanks for reading my mind, knowing me better than I could ever know myself.

>Why is he a "very bad" person?

I based my opinion mostly on the stuff I saw and keep seeing from him directly[1].

>Why are most of the things that he makes "very bad"?

Well, I came to the conclusion (and not just me) that his stuff is designed to be as addictive as possible and as invasive as possible. What he creates, deliberately, is the modern equivalent of cigarettes. Bad attributes in my book. His products are, in my humble opinion, a major contributing factor in the polarization of society and the decrease in civility and political culture all around the world (and no, I am not referring to Russians abusing facebook, I mean facebook itself, which attempts to increase addictiveness by highlighting extremes and conflict as a matter of "entertainment").

On top of that, he and his companies are using their power to lobby and push for things they know full well are detrimental to society but helpful to Facebook, whether it's regulation designed to strengthen Facebook through regulatory capture, or bringing "zero rated" Facebook to developing countries in an effort to stomp out local competition before it even exists. Noblesse oblige.

And then there is the constant torrent of news about Facebook's other indiscretions: e.g. failing to keep the data they siphoned off secure, e.g. misusing data gathered under false pretenses, like 2FA phone numbers, or e.g. how they treat the poor souls they hired for pennies in Manila to do their content moderation, etc.

>There's been a spate of bad press about Facebook recently, so that's primarily what's been coloring your view.

I don't give much attention to the attention grabbing op-eds and pundits and sensationalized reporting. E.g. I think the entire Russia stuff when it comes to facebook is overblown, by a lot. But again, thanks for reading my mind.

By and large, I find CNN reporting despicable, MSNBC reporting even more so, but the uncrowned king of despise still remains Fox News, when it comes to TV networks.

>Democrats are very bad people making very bad policy most of the time

Since you bring it up, I believe that Democrats are mostly good people making bad policy most of the time. And that Republicans are, well... kinda split, with some real bad (and loud!) people and some good people with good intentions, and that Republicans make bad policy most of the time.

[1] e.g. https://www.businessinsider.com/embarrassing-and-damaging-zu... or e.g. the bullshit stuff he said in front of congress


Can we keep personal attacks off of HN?


>Inviting the government to arbitrate what qualifies as “harmful” speech is a legal and ethical minefield, while establishing a third-party system to do the same would amount to offloading corporate responsibility. There’s no reason to expect every platform to adhere to the same content policies, but every reason to want them to exercise judgment and accept accountability.

Is there a term for when journalists forgo making an argument about why something is bad in favor of just making a bunch of assertions + some negative sentiment words? Why is deciding what speech / UGC should be allowed in social media a 'corporate responsibility' such that letting the government or some 'impartial' NGO arbitrate this a bad thing? Why would you want every platform to have its own content policy? Why do you want corporations to 'exercise judgement' and to be accountable (to the state? to journalists?) if you also don't want there to be some formal guidelines about what is or isn't allowed on social media?


That line was exactly where I lost interest as well.

This author is stating that Facebook should make its own decisions about harmful content, and that Facebook should ALSO be accountable to some other entity for those decisions (presumably the government, but who fucking knows based on the content of the article...), while happily saying that creating any sort of governmental guidance on that issue is "a legal and ethical minefield".

That's bullshit. You can't have your cake and eat it too. Either create guidelines that steer company decisions here and apply them, or accept that companies will make decisions that do not match the author's idea of morality.

You cannot judge a company without setting standards. You cannot let a company judge itself and expect them to make the decisions you think are correct.


Well, legal accountability isn't the only kind of accountability. This is Bloomberg we're talking about, after all. This article links to their piece criticizing GDPR, which they say is all wrong because the solution to privacy problems is "transparent pricing and more competition".

So I think maybe what they're saying here is that Facebook needs to exercise judgement and be accountable to the market.

I don't think I really agree with them, but it's a coherent, internally consistent point, and on-brand for Bloomberg.


Yeah, I don't buy that part either. The problem is inherently hard, and pushing the responsibility for policing content onto a corporation is not okay. As a specific example, I was listening to a podcast earlier that dealt with how Facebook filters content (I think Radiolab). I'm paraphrasing heavily since it's been a while since I heard the show, but I think I got the gist of it right.

- There was a violent attack in Mexico against a journalist by some drug cartel, and a random body part (a leg or an arm or a head or whatever) was chopped off and posted on social media by the cartel. The people protesting cartel violence picked this image up and were using it as part of their protest. Facebook's censors allowed it until the image popped up on some school kid's feed in America / England and, all sorts of outcry later, it was removed from the site as a violent image.

- The Boston Marathon bombing happened, and gruesome images of people lying on the ground with limbs strewn about started getting shared on the site. Facebook's policy at this point specifically said no violent images on the site, so they got blocked by the censors. Media picked this up and raised an outcry about how Facebook was censoring these images, and basically someone high up said 'leave it up or else' and the images were allowed on the site.

This is clearly hypocrisy, and there's no right answer here. Traumatizing school children with violent imagery they didn't even want to see is not great, while the Boston Marathon bombing was a significant political event in America and those images didn't deserve to be silenced.

This ends up being a question of free-speech and what sorts of content Americans were okay with. There are no right answers and I believe the govt. definitely should step in and provide guidelines here.

The disingenuous thing is using this as an example of how Facebook is okay with govt regulation while resisting any regulation around things that can hurt them like any number of their privacy mishaps, shady 3p data markets and ad tech in general.


This was a good episode and really showcased how the standards were forced to evolve by various stakeholders (protestors, execs, govt, etc).

I think I recall one of them pointing out that a lot of the challenges came from Facebook trying to be everything for everybody.

Moderating content so people don't get pictures of Boston Marathon gore next to their grandkids' photos is a problem they made for themselves.


>This is clearly hypocrisy, and there's no right answer here.

There is a lot of hypocrisy going on, but this doesn't mean there is no right answer or that the right answer is to invite government fucking censors to control everything.

There are pretty obvious things platforms can do:

1. Instead of opaque algorithms, give people control over what they see (see the sketch after this list).

2. Either do bare legal minimum of moderation or create clear, exhaustive, stable and unambiguous rules for which content is and is not allowed on the platform. Once the rules are set, take a stand and abide by them in all cases.
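
A minimal sketch of what point 1 could look like in practice (the names and knobs are invented, not any platform's real feature set): the feed applies exactly the rules the user declared, and nothing else.

    from dataclasses import dataclass

    @dataclass
    class Item:
        author: str
        text: str
        timestamp: int

    @dataclass
    class FeedPrefs:
        friends_only: bool = True     # user-declared, not inferred
        muted_words: tuple = ()       # explicit, inspectable filters
        chronological: bool = True    # no opaque ranking model

    def build_feed(items, friends, prefs):
        visible = [i for i in items
                   if (not prefs.friends_only or i.author in friends)
                   and not any(w in i.text.lower() for w in prefs.muted_words)]
        if prefs.chronological:
            visible.sort(key=lambda i: i.timestamp, reverse=True)
        return visible

    prefs = FeedPrefs(muted_words=("brexit",))
    feed = build_feed([Item("bob", "Brexit outrage!", 2),
                       Item("ann", "holiday photos", 1)],
                      friends={"ann", "bob"}, prefs=prefs)
    print([i.text for i in feed])  # ['holiday photos']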


>Why is deciding what speech / UGC should be allowed in social media a 'corporate responsibility' such that letting the government or some 'impartial' NGO arbitrate this a bad thing?

Because there is far more freedom to reject a corporation than the government (the NGO case would depend upon how it interacts with the government). NGOs and governments are susceptible to bias, and while they may be less susceptible than a private corporation, they have far stronger enforcement of whatever bias they do have (once again assuming the NGOs are somehow working with government; if not, then this only applies to government).

Take for example someone determining what is allowed to be printed in a book. Let's go back about two or three decades, so during the timeframe when most of us were alive even if not at all aware of politics, when social norms and even laws enforced views that we would find objectionable today.

Now, consider two possibilities. One, a corporation decides what isn't allowed and thus chooses to ban any pro-LGBT content. Two, the government decides what isn't allowed and thus chooses to ban any pro-LGBT content. Which is better?

In my mind, the corporate ban is far better because it is much easier for other smaller corporations to be started that will publish pro-LGBT content. As social norms change, corporate enforcement of morals give rise to more freedom.

Perhaps an even better example that is still relevant today instead of a few decades ago: according to the federal government marijuana has no medical benefits.

I'd rather lawmakers not be making any more laws that further restrict my freedoms.


> Why is deciding what speech / UGC should be allowed in social media a 'corporate responsibility' such that letting the government or some 'impartial' NGO arbitrate this a bad thing

Government arbitrating speech is what the First Amendment explicitly bans. And I think there's a reason why it's the First: if the government controls speech, then it controls the election of the next government, and so on, so it pretty much ends democracy, or at least severely limits it.

NGOs can already arbitrate anything they want - as long as it is voluntary to listen to them. There are many NGOs that already issue their opinions about speech. As long as it is not mandatory - it's fine and already exists. As soon as it starts to be mandatory - see above.

> Why do you want corporations to 'exercise judgement' and to be accountable (to the state? to journalists?) if you also don't want there to be some formal guidelines about what is or isn't allowed on social media?

Corporations already exercise judgement and always have. But so far there was freedom for every corporation to exercise whatever judgement it has, either good or astonishingly bad (as in the case of Facebook), and for users to choose whether they want to deal with this corporation or another one. Introducing the government into the equation kills that.

> there to be some formal guidelines about what is or isn't allowed on social media?

In the US, one of the basics of limitations put by the people on the government is that government, outside of the very narrow task of preventing imminent crime, does not get to say what is or isn't allowed to be said.


I don't doubt regulation would help Facebook in some respects, but (in usual Bloomberg fashion) this article is also stretching some things a bit. For example:

> Start with harmful content, which Zuckerberg defines as “terrorist propaganda, hate speech and more.” Facebook would prefer that someone else decide what constitutes such content and should thus be taken down.

I agree with this. Having a private corporation that's invested in engagement, and which is also headquartered in the US (very far geographically and culturally from some of the places it moderates), be in charge of defining what's right and what's wrong is and has been a recipe for disaster, let alone the implications it could have for the sovereignty of nations.

> But it’s hard to see how this would benefit anyone but Facebook.

Excuse me, what?

> Inviting the government to arbitrate what qualifies as “harmful” speech is a legal and ethical minefield, while establishing a third-party system to do the same would amount to offloading corporate responsibility.

Isn't this the government's job though? To define what is a crime/wrong and what isn't, and to deal with the moral aspect of legal codes? Isn't this why we have laws? I don't think anyone is vouching for "big brother" type government control here, but governments always have a principal role in civil rights, inclusion and the moral development of a nation. It's part of their job, and the reason people care to have leaders with strong moral compasses as public servants. Has this view of government been lost in the US?

> There’s no reason to expect every platform to adhere to the same content policies, but every reason to want them to exercise judgment and accept accountability.

I'm sure there are things that should clearly be blocked or allowed. But the gray areas are what's being discussed here, and honestly I don't think that should be Facebook's job (or that of any private, American, for-profit corporation, for that matter).


It has never been the US government's job to decide which speech is 'harmful' and should be silenced. The First Amendment prevents that. (Of course, some things like slander are illegal, but that's nothing like the 'harmful speech' Facebook is talking about.)


It is entirely the US government's job to decide which speech is harmful: it just takes the view that most speech isn't harmful, and should be allowed. The First Amendment is part of the US government's decision on speech: its view is that most speech is more good than bad, and thus is protected. When it does decide that certain speech, like libel, is harmful, it bans it. It even can ban speech in specific contexts! For example, in Brandenburg v. Ohio the court upheld that speech "which would be directed to and likely to incite imminent lawless action" could be banned. So, saying an inflammatory thing privately to your friend, for example, could be fine, but boosting that post to millions of people could be viewed as "likely to incite imminent lawless action" such as a riot.


> The First Amendment is part of the US government's decision on speech

This is just flat out incorrect.

The Bill of Rights was the people saying "You want a US government ruling over the several States? Here are the rules for what you're allowed to do." The US gov't never made a decision on speech, they agreed to the terms of "We, The People", full stop. You're acting like it's the opposite.

This is also why the gov't can't just rewrite the 1st amendment willy nilly—all of the US government's legitimacy with the American people comes from respecting the Bill of Rights. Stop respecting those rights and a rebellion is right around the corner.

Historically, the States had to approve the Bill of Rights (and the rest of the new Constitution) in order for it to come into effect. It's definitely NOT the US gov't "deciding", and it never was. All of the power resided in the States.

Interestingly, the "Federalists" that (re-)wrote the Constitution were mostly opposed to the Bill of Rights—that's the contribution made by the anti-Federalists to the US Gov't as it exists today.

Isn't it funny how most citizens treasure the Bill of Rights more than anything else in the Constitution? That makes it pretty clear which "side" had the well-being of the people in mind.


> Of course, some things like slander are illegal

Specifically, to prove slander, the defamed party has to demonstrate intent and damages. Same with fraud. Both of these are cases of "your right to swing your fist ends at the tip of my nose."


Not in many non-USA countries. Just damage and negligence is enough in many other English speaking countries [1].

1. https://en.m.wikipedia.org/wiki/Defamation


It also depends on whether it's a tort or a criminal matter.


I think there's a bit of a disconnect when it comes to "harmful" speech, and "illegal".

The US government isn't going to decide what is "harmful" at least not for long as a legal challenge would quickly do away with any such rules.

In the end the platforms have to decide for themselves and police themselves, unless they want to be 4chan or whatever the next "post anything" kind of place is nowadays. Right or wrong, they're the only folks who can do it.


The only problem I see here is that Facebook is offloading the 'hard problem' of algorithmically dealing with such massive data, which requires top-notch machine learning algorithms able to detect sentiment at 'Facebook scale', to a massively more technically inept party. Look up how much money it cost to build the ObamaCare website, then tell the same party to develop an algorithm for a problem that Facebook, one of the biggest technology companies in the world, is having a hard time solving. This problem is only going to get harder and harder as the amount of data created rises.


> I agree with this. Having a private corporation that's invested in engagement, and which is also headquartered in the US (very far geographically and culturally from some of the places it moderates), be in charge of defining what's right and what's wrong is and has been a recipe for disaster, let alone the implications it could have for the sovereignty of nations.

What? If Facebook decides it, it's a threat to the sovereignty of countries other than the US, but if the USG decides it, it isn't?


> I don't think anyone is vouching for "big brother" type government control here, but governments always have a principal role in civil rights, inclusion and the moral development of a nation.

Lol, what does this even mean? You want a Theocracy with Zuck writing the scripture? Are your views tainted in any way by Facebook Incorporated?

The government and their role in defining "harmful" has been defined already. It's called the First Amendment. Might want to check it out.

https://en.wikipedia.org/wiki/First_Amendment_to_the_United_...


Whose government?


Each person's, hopefully.


All of this moderation stuff feels like a "damned if you do, damned if you don't" situation for Facebook.

People have been crapping on Facebook for making judgement calls on what's allowed on its platform. When Facebook does so without government backing, people get up-in-arms saying that it should be the government's role to decide free speech. But when the company goes and requests that government start doing that, people get up-in-arms about how it'll be "regulatory capture".

I can't predict what the outcome of all of this will be. I can however, predict that there will be an angsty article (Bloomberg? New York Times? Wall Street Journal?) decrying how Facebook is awful, evil, and incompetent.


I feel like this entire opinion piece is just a knee-jerk reaction against anything Mark Zuckerberg has to say.

I was giving it the benefit of the doubt until it started talking about how bad GDPR is. GDPR has some shortcomings, but it's a bit less black-and-white than this article makes it out to be.

Maybe what Mark Zuckerberg saying really is self-serving but this article doesn't make any good arguments for it. It's just making assertions which it doesn't back up. I don't know if it's just me getting old, but I started noticing this kind of intellectual dishonesty more and more recently in the media and even here on HN.


I think a lot of psychological research confirms that people start with the opinion or position and find evidence (any will do) to back it up, not the other way around. So it's not surprising it shows up in lots of debates between people.


".. forcing competitors to make their own data exportable to Facebook would in all likelihood benefit Facebook"

What competitors is the author even talking about here? There are, as far as I'm aware, essentially no competitors left to Facebook anymore.

That being the case, I'm not sure that statement is correct at all. On the contrary, it seems like if FB were required to offer an easily exportable data format, that other services would pop up overnight to try and lure people away from FB onto their (hopefully) more privacy conscious platforms. It would also lower the bar for people to make the switch to something else, as they know their friends can switch just as easily without losing their data.

I agree with the rest of the points of the article, but I don't see how data portability can be anything but a net positive for the consumer.


If "data portability" were interpreted widely enough, it would mean that people using other social networks could have a "proxy" Facebook account which is automatically synched with their main (non-Facebook) account.

Ideally, any posts that your Facebook account could see would be fetchable by your non-FB account, and anything you post on your non-FB account would be re-posted on FB for you.

This way, as far as your friends are concerned, you are a normal Facebook user, but you would never actually browse the site (or use the app). That would allow competition in how the UI works, but also less ad revenue for Facebook (and more engineering work, which they would presumably claim is unfair).
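
To make that concrete, here's a rough sketch of the sync loop such a proxy would run. Everything here is hypothetical - the endpoints, the token handling, the payload shapes - since no standardized portability API actually exists today:

    import requests

    # Made-up endpoints; a real version would need a standardized,
    # legally mandated portability API on both sides.
    FB_API = "https://graph.facebook.example/v1"
    ALT_API = "https://othernetwork.example/v1"

    def sync_once(fb_token: str, alt_token: str) -> None:
        # Mirror everything my FB account can see into the other network.
        feed = requests.get(f"{FB_API}/me/feed",
                            params={"access_token": fb_token}).json()
        for post in feed.get("data", []):
            requests.post(f"{ALT_API}/import",
                          headers={"Authorization": f"Bearer {alt_token}"},
                          json={"source": "facebook", "post": post})

        # Re-post my outgoing content back to FB so friends there see it.
        outbox = requests.get(f"{ALT_API}/me/outbox",
                              headers={"Authorization": f"Bearer {alt_token}"}).json()
        for post in outbox.get("items", []):
            requests.post(f"{FB_API}/me/feed",
                          params={"access_token": fb_token},
                          json={"message": post["text"]})

Run something like that on a schedule and your FB friends see a normal account, while you never touch the site.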


There are several active competitors depending on how you define the market. WeChat, TikTok, LinkedIn, Pinterest, etc.

https://en.wikipedia.org/wiki/Social_networking_service#Larg...


The 3 key features keeping most people I know on Facebook are photo sharing, event organizing, and messaging. The things you listed do one or two, but I don't think any of them do all 3.


If company A is selling hot dogs and ice cream, and B is selling hot dogs and C is selling ice cream, does that mean A has no competitors?


WeChat does all three of those, plus payments and food ordering. Basically everything Facebook wishes they could do, WeChat does it for about 1B people.


WeChat doesn't have a significant presence in Western markets, and Facebook doesn't have a significant presence in China. They don't really compete with each other, even though they provide a lot of the same features.


I'm pretty sure WeChat does all 3...


By that standard you could claim that the Honda Civic has no competitors because no other vehicle has its particular combination of features.


> That being the case, I'm not sure that statement is correct at all. On the contrary, it seems like if FB were required to offer an easily exportable data format, that other services would pop up overnight to try and lure people away from FB onto their (hopefully) more privacy conscious platforms. It would also lower the bar for people to make the switch to something else, as they know their friends can switch just as easily without losing their data.

The problem with this is that it presupposes a certain data format, which doesn't allow for innovation. How exactly would you export Facebook's data to Twitter or vice versa? For a start, Facebook doesn't have a character limit; Twitter does. Twitter threads can be nested infinitely; Facebook's can't. Etc.
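
To make the lossiness concrete, here's a toy sketch (made-up schemas, hypothetical limits) of exporting an unlimited-length post into a platform with a 280-character cap. The text survives, but only chopped into a numbered thread; formatting, reply structure, reactions, etc. don't map at all:

    # Toy converter: unlimited-length post -> chain of <=280-char messages.
    LIMIT = 280
    PREFIX_ROOM = 10  # leave room for a "(nn/NN) " counter prefix

    def export_post_as_thread(text: str) -> list[str]:
        size = LIMIT - PREFIX_ROOM
        chunks = [text[i:i + size] for i in range(0, len(text), size)]
        n = len(chunks)
        return [f"({i + 1}/{n}) {c}" for i, c in enumerate(chunks)]

    # e.g. a 1000-character post becomes a 4-message thread:
    # export_post_as_thread("x" * 1000)

And going the other direction (deeply nested threads into a flatter comment model) you'd have to throw away structure the same way. Any mandated "portable format" either freezes those feature sets in place or loses data in translation.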


There are still ten or so of us on Diaspora.


Dating sites


It's almost comic villainy - like, everything they say, do, propose, etc. is always evil in some way.

Can anyone on the inside give some insight into current morale? Is the constant deluge of negative press cutting through the indoctrination or nah?


> It's almost comic villainy

I think that feeling is the group narrative talking. It's a thing we (HN, reddit, etc) choose to do together, rather than a specific fact that's true or false.

Former Facebook employee here: One thing that you don't appreciate from the outside is that FB employees hear more FB criticism than you do. (This is just true of anyone working for any company.) And one of the side effects of hearing more is that you hear a lot of very wrong, very poorly thought out stuff. So it does get easier to sit out from a group narrative like this, because the parts that are wrong or misleading jump out at you.


It’s the same as it’s always been, and the same at most of the big tech companies: Everyone believes they are changing the world for the better while extracting value through some kind of exploitation.

Morale is high because paychecks are high. How much you get paid is the key indicator of how much you are worth as a human being. That’s the game we play.


> It’s the same as it’s always been, and the same at most of the big tech companies: Everyone believes they are changing the world for the better while extracting value through some kind of exploitation.

I'm curious what your experience is with this? I've worked at Google and AWS and I don't think I've met a single person who thought they were "changing the world".

The closest it ever got to that was when I once remarked to my lead that I wish I felt like the work I do mattered to which he replied something along the lines of, "We play but a small part in the grand movement that is the progress of technology, but don't forget we do play a part."

Pretty much everyone I know just does their job because they want to retire or support their families. The same goes for almost every employee I've seen at companies of other sizes.

> How much you get paid is the key indicator of how much you are worth as a human being. That’s the game we play.

Some people may think this way but it's a choice to surround yourself with those kinds of people.


Yeah, it's all a drop in the bucket. Fifty years down the road, nobody will care about Facebook's transgressions of today.

> How much you get paid is the key indicator of how much you are worth as a human being. That’s the game we play.

You can look at an ape in the jungle as getting the most food. Is that the worth of that ape? To the apes, maybe so. To us, it's just an ape.


> while extracting value through some kind of exploitation.

In your view, are there ways of "extracting value" that are not "some kind of exploitation"? Is "changing the world for the better" somehow incompatible with "extracting value"?


From what I've heard, they have nice cushions of dollar bills to block out any negative thoughts ..


I really dislike comments like these, and I find it dismaying that they've gained currency on HN recently. Almost every article about Facebook or Google has some variation of this comment.

> It's almost comic villainy - like, everything they say, do, propose, etc. is always evil in some way.

The problem I have with it is that you can replace the meaning of "they", and have this comment "work" for any number of biased echo chambers. Go ahead and replace "they" with Facebook, Google, Obama, Trump, Exxon-Mobil, scientists, billionaires, politicians, Internet trolls, the media, ad nauseam.

If this sentence can work for any of these scenarios, verbatim, then what's the point of uttering it?

To bring this back to the issue at hand: is there anything Zuckerberg could say that would make you not respond this way? And if you can't come up with anything, are you sure you aren't so completely biased against Facebook that nothing they do could mollify you?


I agree with your premise, not just in the ethical sense of wanting to treat others as human beings; I find that when I manage that, I also make a more compelling argument.

That said, let me try to answer why you're seeing this pattern.

Facebook has a business model that works for certain strategic choices, and doesn't work for others. A big one is they don't charge a subscription fee to users. That puts hard constraints on how they generate revenue.

> is there anything Zuckerberg could say that would make you not respond this way?

That's why it's nigh impossible that Zuckerberg will come along and announce, "we've ended our policy of treating users as the product."

Personally, this was one of the reasons I've been happy to work at firms that sell specific products and services. It doesn't guarantee they'll behave well, but the fact that customers can take their money elsewhere does tend to keep them grounded in the long run.

I think most social media, for that reason and others, has a bad business model. I'd like those businesses to fail to clear out space for alternatives to be built. (And they're not villains working there, so I'd also hope they all land on their feet and find new jobs.)


> I'd like those businesses to fail to clear out space

There's practically zero chance that will happen - either that the existing companies will "clear out" or that the same role would be filled by anything with a different business model. In the real world where Facebook will continue to exist, is there anything that the company or its employees could do that would meet with your approval?


> There's practically zero chance that will happen - either that the existing companies will "clear out" or that the same role would be filled by anything with a different business model.

Are you going to justify that claim? It's been made repeatedly about prior companies that were too big to fail, especially tech giants, and they all fall or become obsolete.

> In the real world where Facebook will continue to exist, is there anything that the company or its employees could do that would meet with your approval?

I gave an example: "we've ended our policy of treating users as the product." You're repeating this rhetorical question without addressing any of the points I made or adding anything new.


> they all fall or become obsolete.

I was in this industry through all of that. It takes a long time for a company to fall that far. Yes, given enough time anything can happen, but I'm not interested in tautologies. In any time span relevant to this conversation, it's not going to happen. If you want to claim otherwise, that's your burden not mine.

> "we've ended our policy of treating users as the product."

OK, so I guess there is that one unrealistic possibility. Even if Facebook moved to a subscription model, "treating users as the product" is not a phrase they'd use for what came before. So you still haven't indicated that you'd be satisfied by anything that could actually happen.

> You're repeating this rhetorical question

It's not a rhetorical question if there are multiple valid answers, and repeating a question is still better than repeating statements based on counterfactual assumptions. If you want to play "high school debate champion" you'll have to try harder.


> In any time span relevant to this conversation, it's not going to happen.

If they made that promise to investors, they could go to prison.

> OK, so I guess there is that one unrealistic possibility.

You're demanding that everyone else show that they're ready to abandon their principles, lest you judge them intransigent and unreasonable. What changes have you made to your worldview without seeing something significant change?

Probably never: your views are shaped by observations of concrete realities and reflections on the consequences of those. And unless those realities change or you get new information, you couldn't come up with a coherent set of new views if you wanted to.

> It's not a rhetorical question if there are multiple valid answers

It's still a question intended for rhetorical effect. I wasn't saying that makes it invalid, it's just annoying if you're repeating it while dismissing the answer given.


So let’s not talk about anything at all then, since everything can surely be mad-libbed. It seems the one trading in vague generalities is you, not him.


To use the parlance of our time, a lot of people are becoming “woke” to just how morally bankrupt our society has become. And while these sorts of companies have always existed, what’s new is this layer of “saving the world” morality that allows people to feel good about it.

It’s like being a henchman, only you’ve managed to convince yourself it’s honest work. And maybe now it actually is.


I've said it before and I'll say it again. Just for myself, of course. Things have been done that I might not have agreed with. It does hurt when the "Facebook is evil" meme gets thrown in my face, especially from so-called friends. So yeah, morale takes a hit, but it's not clear how my leaving would make it any better. Don't you want there to be some good people at Facebook? Or do you deny the possibility, as if people who were good at their last company magically become bad the moment they accept the offer? Believing in no good outcome except extinction is a working definition of hate, and I have no time for that. I and literally thousands of others are here, solving technical problems, trying to influence the non-technical ones, and if you think we shouldn't be, then that's just counterproductive by any standard.


Ah, the first--or maybe second--week of freshman ethics. Change from the inside, or from the outside...

Take some responsibility. Are you evil for signing on? No, but let's not pretend they're welcoming you because you're going to shine a light on unethical practices. Are you evil for staying? That's a judgment call you have to make in your own heart-of-hearts, measuring what you've enabled and contributed, who you've enriched--against the moral influence you've exerted (not to mention the questionable influence you've empowered.)

> I and literally thousands of others are here, solving technical problems, trying to influence the non-technical ones, and if you think we shouldn't be than that's just counterproductive by any standard.

There is a standard. I offer you a baseline: if you're solving technical problems and you're systematically failing to influence the non-technical ones, you shouldn't be there. I give you the benefit of the doubt. Your failure is likely measurably less than systematic.

But enough so? Bear in mind the alternative is not yelling at Facebook from the outside, it's building a better world, a better product, a better community...elsewhere.


> the alternative is not yelling at Facebook from the outside, it's building a better world...elsewhere.

And how many of the critics are doing that? How many are making a positive moral difference at their own companies, however good or bad those companies already are? I posit that moving forward at a "bad" company is more valuable than standing still at a "good" one. In my group at Red Hat, generally regarded as a "good" company, our project plans were influenced by demands from some pretty shady customers - big banks, government agencies, even a Russian propaganda agency. At Facebook, my work supports data scientists who are detecting and eliminating that same kind of propaganda. But I'm supposed to feel worse about myself now?

You talk trash about freshman ethics, but that's exactly the kind you yourself are engaging in. I was in fact a philosophy major long ago (though more logic and metaphysics than ethics) so I should know. ;) It has been over thirty years since I found such simplistic arguments even slightly amusing.


so not only are you just following orders, but when people point that out to you, that makes them shitty because it hurts your feelings to know that you're selling out?


No, it doesn't make them shitty. If they don't have any suggestions other than extinction, they were shitty already.


I'm sure morale is fine. They've hired for this type of culture.


The only thing Zuckerberg is missing is a thick mustache he can twirl. People gave him a pass for his "dumb fucks" quote because he was a "kid" when he said it, but look where we are now with Facebook and the state of the internet in relation to it.


I have zero faith in Facebook as an organization doing the "right" thing. Their machine is fueled by highly targeted advertising. If you erode their targeting capabilities, then marketers will be quick to invest elsewhere. Funnily enough, with ITP 2.1 and a potentially less cookie-friendly Chrome, the browser vendors are increasing FB's value prop to marketers.

I am going to butcher this; I can't articulate it quite the way I think it... Why do we need regulation for a social network that is only as strong as its network? Is the network effect so strong that users can't or won't leave of their own accord? I left FB and Instagram a few years ago. It is just so funny to me that this entity (a people aggregator) is powerful, yet all we need to do is leave it. How many people in your network have to leave before it loses its value? Networks expand exponentially. Is the opposite true?


Why am I not surprised? At this point I have no trust in that company. It would be very unwise to listen to them, and very unfortunate for someone like Senator Warren and those who want to break them up to adopt their policies.


Most government regulation benefits large companies at the expense of small companies, because large companies have more resources to make sure they are in compliance and have the lobbyists who help write the rules. So it's no surprise that new regs here would be helpful to big companies and harmful to small ones.


I'm not even upset that Facebook is lobbying for their own self interest. I just wish such a blatant attempt would have a lower chance of success.


You just quashed your own fears. Fortunately this is going nowhere because it's so blatant.


That was said about many deeply unpopular pieces of legislation over the last few years (the Friday Night Tax Bill comes to mind).


The problem is that blatancy isn't a guarantee against doing something - nor against it being a spectacularly bad idea.


How are you so sure about this?


I'm not but Zuck isn't winning any popularity contests in congress or among the general public these days. Odd time for him to publish this piece IMO.


It may be blatant to us but America has demonstrated an amazing ability to buy into delusions lately.


To be fair, Facebook has said since the beginning that regulation had a strong possibility of making competition harder and entrenching established players.


To be fair, Zuckerberg has said anyone who trusts him in any way is an idiot.



Look, most government regulation ultimately favors the incumbents while making it much harder for new entrants to challenge the large organizations, creating less competition and ultimately solidifying the status quo.


Facebook knows regulation is coming and is transitioning from being anti-regulation to seeking regulation on its own terms (i.e. regulatory capture). Unfortunately, I don't have much faith in our government to properly regulate technology: for one, Congress seems totally unaware of how Facebook works ("Senator... we sell ads"), and judging by the disasters that were the great recession and the Boeing crashes, even a well-known "strong" regulator isn't enough to curtail the biggest companies.


There's no need to "regulate the internet"; we only need to regulate social network operators. Regulation should look like this:

* no ban of any user whatsoever
* obligation to provide public, unlimited, read-only access to a standardized API for all the data
* no private messaging

Then let a million better Facebooks emerge. The content policing problem will solve itself.


This seems like an attempt at classic regulatory capture.

"Regulatory capture is a form of government failure which occurs when a regulatory agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry or sector it is charged with regulating." [0]

[0] https://en.wikipedia.org/wiki/Regulatory_capture


Alternative take: this is an attempt by Zuck to push regulation preemptively in order to counter the recent bad press. Better that the gov't create rules affecting all players (all companies affected, including FB) than to continue having just your own company in Washington's crosshairs.

Even if he doesn't get to dictate the regulation (thus not achieving regulatory capture), it would still accomplish the mission of keeping the business alive. They will continue selling their ads and making money, regulation or no regulation.


It's possible for both of these to be true. Supporting regulation - particularly known regulation that is pretty similar to what you have to do anyway to compete internationally - both takes the PR crosshairs off your company and creates a barrier to entry for any potential competitors.


It's pretty transparent and gross to watch. He's just as clueless about how he's seen as a CEO as he was as a Presidential tire-kicker.


Eh, it's pretty tough to communicate something really concrete in an op-ed.

If he specifically described what a transparency report would say, 99% of hot takes would be about how it's wrong or whatever. People's brains would turn off and the opportunity for a conversation would be squandered.

There's an opportunity here for more sincere activists to promote meaningful transparency, because now he's put it on the radar of otherwise low-tech-literacy audiences.

For me, true transparency is being able to look at code. If your code is interpretable, and it looks okay, it is okay.

When it's difficult to interpret, like with a collaborative filtering algorithm used to select newsfeed items, an independent group ought to be able to query the "baked" system or measure user behavior and come to its own conclusions. I wouldn't call this a "transparency report," it's more like an "independent analysis."

It would take a while to determine what questions to ask. But to actually answer them would take an afternoon: a database connection, really. After all, they have incredibly comprehensive user behavioral data, so you can pretty much answer anything you can think of.
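
For a sense of what that afternoon would look like, here's the kind of question an independent analysis might start with, sketched as a single query. The table name and columns are made up, but FB certainly logs the equivalent:

    # Hypothetical audit query: how concentrated are impressions by
    # content class, and how far does each class actually reach?
    AUDIT_QUERY = """
        SELECT content_class,
               COUNT(*)                AS impressions,
               COUNT(DISTINCT user_id) AS reach
        FROM feed_impressions
        WHERE shown_at > now() - interval '30 days'
        GROUP BY content_class
        ORDER BY impressions DESC;
    """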

There's no cost burden or competitive disadvantage. Their entire tech infrastructure is built around answering user behavior questions easily. And most of the value is tied up in the data, not the models.

The problems they're experiencing are related to their particular implementation of collaborative filtering. If outside groups could see that code (at least), it would be easier to, say, run a simulation and demonstrate what kind of user behavior (or enemy action, as it were) makes 10% of people miss 99% of "benign user content" and see 1% of enemy-action content.

Maybe you don't believe it's implementation related, and you think it's something innate to social media. Well, what if the newsfeed just showed content randomly? Clearly that would bury enemy action content, just because it's so relatively infrequent. It's important not to be too dogmatic with this sort of stuff, because it will obscure your ability to reach across the aisle to stakeholders like Facebook and get them to do stuff in a sincere way.
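
To see how much the selection rule alone matters, here's a toy simulation with entirely made-up numbers:

    import random

    random.seed(0)
    POOL = ["toxic"] * 10 + ["benign"] * 99_990   # 0.01% "enemy action"
    FEED = 50                                     # posts seen per session

    # Random feed: expected toxic impressions per session is
    # 50 * 0.0001 = 0.005, i.e. roughly one per 200 sessions.
    print(random.sample(POOL, FEED).count("toxic"))

    # Engagement-ranked feed: if ranking gives toxic posts 1000x the
    # exposure weight (hypothetical), that jumps to ~4-5 per session.
    weights = [1000 if p == "toxic" else 1 for p in POOL]
    print(random.choices(POOL, weights=weights, k=FEED).count("toxic"))

Same pool of content in both cases; only the ranking differs.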

So at its most basic level, sharing the code can't possibly lead to regulatory capture as you describe it. It would be more advanced than the status quo, which is, "Whatever the hell Facebook wants you to hear."


I'd like to add to this that regulatory capture isn't some benign cancer. I read a really good piece on the history of the Venezuelan economy recently, and a significant factor in their failure as a liberal state and decline into socialism was regulatory capture. Capitalism only works when monopolies are fought/busted. Regulatory capture does the opposite, and in turn tarnishes people's impression of capitalism.

I think this should really give a strong impression. Venezuela went from one of the most prosperous nations in the world to what is likely now the worst place in the world for a poor person to live in, and it only took a few decades. There's no reason the US is immune to the same fate, and it started long before socialism with things like regulatory capture.

"The Venezuelan economy has a profusion of restrictions and impositions on businesses, particularly in the areas of labor and new business formation. Only the big, well-established companies can afford the high regulatory burden. This is consistent with George Stigler’s (1971) insight on regulatory capture, which abets government creation of monopolies stymieing competition. Further, the high levels of informal or underground economic activity in Venezuela are consistent with existence of a high regulatory burden (see Diaz and Corredor 2008). Unsurprisingly for such a setting, corruption is rampant." [1]

[1] https://econjwatch.org/articles/venezuela-without-liberals-t...


[flagged]


Genuine question: Is it just me or does this reddit-style mugging for karma not really belong on HN?


How would you describe a "reddit-style mugging for karma"?

I didn't find that the post you're referring to added anything to the conversation, and I also didn't find it entertaining, which I believe was the poster's goal.

However, I wasn't struck with some kind of "this is a bad thing from other platform" vibe.


I think “decorum” around here is to strive for some kind of unified voice similar to The Economist. I’ve tried some humor or exaggerated voice and it is rarely appreciated with upvotes.

One thing I have learned is HN readers like Weekend at Bernie’s, where my tongue-in-cheek defense of the film garnered the most comment upvotes I’ve ever received.


The main problem with humor on HN is that it's just too easy to find humor anywhere on the internet, but there's no other news site where you can critically discuss the news itself.

Also on Reddit it's frustrating when I'm writing something informative and it gets hidden between the memes and the jokes, so I tend to stick to jokes there.


Why do you think this is an attempt at humor? I would say this is what is often referred to as a thought exercise.


That’s an accurate description, and, as someone tempted to go for humor, I think HN’s constraints are valuable for me.

I’ll often have a knee-jerk post that’s either funny, caustic, or redundant. I find things go best here (with some ups and downs) when I focus on providing genuine info or insight that others aren’t likely to already possess.

This pressure is a Good Thing imo.


I would agree with you - mostly when I comment here it's about something that I have direct, professional experience with. Either education, rural living, low income folks, or farming.

Otherwise I tend to just ask questions to help me understand the article from people who actually know.

But my first reaction to most articles is a dick joke. That belongs on reddit, not here.


I upvoted that comment. And I'll upvote this one for mentioning it. That's a great movie.


Not supposed to do this, but I'll spend those upvotes: lol.


🆒 Upvoted purely on the Weekend at Bernie’s reference.


How did you get an emoji on HN?



⭐ I assume via the unicode emoji set?


I tried a copy-paste from KCharSelect, but it doesn't work.


Economist, you wish.


>How would you describe a "reddit-style mugging for karma"?

Normally an extreme appeal to emotion. In this case excessive use of caps-lock and a fictional narrative that doesn't substantiate itself.


I think it got a valid thought across. It works as a sort of shorthand. Would have been 3x longer in essay form.


> It works as a sort of shorthand. Would have been 3x longer in essay form.

beering’s sibling comment made essentially the same point in direct expository form, and was much shorter, so I disagree with the suggestion that that rambling narrative is any sort of shorthand.


I appreciate your comment, even if you didn't mine.

Thanks.


I get what you're saying. The main reason I hate 'The Martian' is that to me the whole book sounded like the above post. Like it was written by someone on Reddit.


It doesn't belong anywhere.


Genuine question: Does this practice of summarizing the shenanigans we see from entities like FB, driving home a quick idea without requiring pages and pages of text, not have a place?

I have been on HN for more than a decade. I tire of the holier-than-thou bullshit where a dialog providing context for various subjects is shunned over some taboo about the delivery of the premise.

So I reject your position.

Would you like me to cite things I know personally about FB, or cite every single compacted reference made in that comment at great length, so you can feel superior in an understanding you may not actually have?

I mean -- FFS -- understand that there are people who want to make a condensed comment on shit that happens in the tech field, but I don't want to write a diatribe with a bibliography on why "because screw them, that's why" is a salient summary.

(See: on HN "google deep-sixes usenet" on FP today)


I think the nuance is actually critically important, because degrading people -- even powerful ones -- down to caricatures robs us of the understanding we need to solve these problems.


Exactly. If it were so obvious then we wouldn't be in this situation in the first place. There are reasons why things got to be this way and we have to understand them to progress.


Caricatures do not rob you of anything. They emphasize grotesques or background features to the point of extracting them from invisibility.


> Does this practice of summarizing the shenanigans we see from entities like FB, driving home a quick idea without requiring pages and pages of text, not have a place?

Maybe for some people. I'm honestly not at all sure what you hoped to communicate with your previous post.


Your post has an extremely pretentious tone; use the words everyone else uses if you want to be clear. You don’t sound smart using words like "salient" or "diatribe" when simpler words would do the trick.


[flagged]


I don't see that too much, to be honest. If there is any sort of corporate love here, it feels more like it's Google that enjoys it, not Facebook.


Yeah - there are many comments about Facebook's lack of ethics on this site. I'm betting more than any other large tech company.

Hell, there was even a poster in a different Facebook story talking about how it was fun to tell headhunters they refused to apply to Facebook because of ethics.


So data portability is now bad, because Facebook is suggesting it.


Here's a regulation that I'd like considered: a default licensing scheme for personal data that stipulates a 50% revenue share with the user on any money made from the use or sale of their data. The regulation would stipulate that the scheme can only be renegotiated in ways that would be impractical to implement at scale (to avoid easy circumvention through TOS and click-throughs).

This would hit the privacy-invading companies where it hurts, in their pocketbooks.


Profit sharing based on the sale of data has been discussed a few times on HN. However, the price for it is so low on a per-user basis.

That said, there should be a law on the books that disallows the sale of data between companies. I'd wager this is the bulk of where Facebook makes its money. That's what we should be targeting with legislation.

The use of data should be fine, as the end user or end company never actually gets their hands on the data. They just use a UI with categories and filters and put up ads based on the options available.


Facebook makes its money from selling ads. You can see this in its revenue report, https://investor.fb.com/investor-news/press-release-details/...

Of course, these ads are highly targeted using all sorts of personal data, but they aren't actually selling the data, as that's also just less profitable.


I know that.

But is there a distinction in advertising between:

1) Allowing advertisers to use the data for targeting ads.

2) Selling the data to companies such as Apple for other purposes.

There isn't any visibility. This is what I'm trying to get at.


From another report,

>> Mobile advertising revenue – Mobile advertising revenue represented approximately 91% of advertising revenue for the first quarter of 2018, up from approximately 85% of advertising revenue in the first quarter of 2017.

In particular, Facebook makes its money from mobile ads.


> However, the price for it is so low on a per user basis.

A user won't see much benefit in data profit sharing, but the companies will definitely see the costs. The effort to set up payout channels for every user would drive them to look at other revenue sources.

Plus, if all user data comes with a 50% cut, and their biggest income is based on user data, that's significant to the company.


> Here's a regulation that I'd like considered:

Well, I would regulate Facebook and Google so that they are required to offer their services to me at production cost plus some decent margin, while guaranteeing that if I pay that price, my data is not monetized or shared in any other way with third parties (including using my data to train any kind of machine learning models).

Also, FB, IG, and WhatsApp should be split into separate companies. Google web search might as well be separated from everything else.


> Facebook exacerbates poisonous politics by creating filter bubbles of like-minded partisans, spreading hoaxes and inaccuracies, inducing anxiety and paranoia, rewarding clickbait and outrage, and so on.

An interesting example is the yellow vest protests in France. The mass media heavily emphasized the violence of the protesters.

But, on social networks, people mainly shared videos of violence from the police. (ex: https://twitter.com/Albirew/status/1104799813076508673 )

Another interesting piece of news, discovered thanks to social networks, was that public TV photoshopped a picture of the protest, changing the text "Macron get out" to just "Macron" [1].

The contrast between those two sources of information is striking.

Both top-down mass media and bottom-up social networks bring interesting news, but both are biased and inaccurate.

[1] https://www.lesinrocks.com/2018/12/16/actualite/quand-france...


> But, on social networks, people mainly shared videos of violence from the police

It was actually both.


If Facebook wants to get ahead of and control privacy regulation, Zuckerberg needs to create an arrangement with a third party to push his agenda, i.e. a charismatic CEO from another company with a good public image. If it were done correctly, he could push the exact same agenda and get kudos instead of (completely justifiable) skepticism. Politicians would be especially susceptible to this kind of proxy-ism.


"Facebook would prefer that someone else decide what constitutes such content and should thus be taken down. But it’s hard to see how this would benefit anyone but Facebook."

I can actually get behind this. As I said before, I'm wary of Fb and Google becoming judge and jury of what is or isn't acceptable speech.

It's against their will now, but they may well grow to like it.


Maybe Facebook should convert into a Benefit Corporation. There's a petition for this here: https://www.change.org/p/mark-zuckerberg-convert-facebook-in...


All of the constant time in front of Congress is likely affecting their bottom line. The constant threat of regulation or breakup is getting to the point where it is harming the company more than any regulation would. It makes sense for Facebook to get in front of it.

That said, obviously they shouldn't get to craft their own regulation. And any regulation setting the boundaries of an industry risks locking in the way it works (think how awful car dealerships are in the US). If we craft a set of rules for Facebook, we will almost guarantee that it continues to exist.

My (very weak) opinion as someone who hates FB is that we keep the status quo - the constant pressure and hoop jumping and negative media cycles seem to do quite well at driving users away.


I hate Zuck as much as the next guy, but this writer is being dishonest.

Data portability is the first step in allowing other companies to compete with FB and takes away a huge advantage they have (switching costs). Call a spade a spade.


> On privacy, Zuckerberg has become an exponent of Europe’s behemoth rulebook, known as the General Data Protection Regulation. That approach is badly flawed in its own right: It undermines innovation, burdens companies, annoys users, and offers few benefits.

Outside of the author's views on Zuck & Co., this is a myopic view of GDPR. In reality, GDPR empowers rather than annoys users, and the regulation actually gives EU citizens some degree of control over their data, which is of increasingly huge benefit, especially in their interactions with Orwellian companies like Facebook.


Well, I am sure there are some aspects which annoy users (e.g. having to search for the opt-out button on every page you visit), but AFAIK those are actually bad implementations. Sure, some things could have been done better, but ultimately the GDPR is a huge step in the right direction.

It is good this article is posted as 'opinion' as it is quite opinionated (doesn't reflect my opinion though).


According to the article, ‘Zuckerberg deserves a hearing’. But he didn’t show when called before the UK parliament; if he only appears when it suits him, does he deserve a hearing?

https://www.google.co.nz/amp/s/www.independent.co.uk/life-st...


I think you should show up at my home to justify yourself for this comment in person, otherwise I don't think you deserve a hearing either.


That’s not even remotely similar. His company interfered with an election and was asked to explain. He didn’t show. Why was your comment made with a throwaway?


On the surface this isn’t shocking at all. And yet, when was the last time anyone really tried to compete with Facebook? Sure, they’re in the ad business like everyone else, but no one really competes with them on product.

Yet whenever we hear about Facebook culture, it’s about move fast and break things. Is that really a culture equipped for coping with regulatory compliance? In my experience, the answer is no.

So I’m really left scratching my head on this one.


You can't compete with Facebook unless you commit the same invasive acts that allowed them to collect their data in the first place. Nobody is going to willingly enter their phone number, address, all their friends, all their contacts, and all their metadata, at world scale, anytime soon, into some new start-up.


That's kind of my point. If no one else is willing to compete, why bother advocating regulation that makes your incumbent advantage harder to maintain?


The article starts by saying that inviting the government to arbitrate is a legal and ethical nightmare, and establishing a third party is offloading corporate responsibility. It finishes saying that it's rarely a good idea to outsource regulation to the enterprise that needs it. So which is it? Gov regulation is no good, and self regulation is no good!


I don't think anyone who built a network that exploits people's profiles for profit should be proposing privacy rules.


"Zuckerberg deserves a hearing, but it's rarely a good idea to outsource regulation to the enterprise that most needs to be regulated."

This sentence implies there are instances (albeit "rare") when it is a good idea to outsource regulation to the enterprise that most needs to be regulated.

Can anyone cite a few examples?


I'm not a Facebook fan, but nor do I agree with this article; the analysis seems pretty shallow. Zuck's rules seem like broadly positive things, and like they'd benefit the competition more than this article lets on.

Harmful content => Any government regulation of harmful content will probably hurt small players more, since moderation is a big fixed cost. But everyone will realistically pay that cost anyway unless they're something decentralized like Mastodon.

Elections => Nothing to do with the size of the platform, I think; they're just saying it's disingenuous since filter bubbles are more important. I disagree: election financing is super important, and as another poster said, filter bubbles are as old as print.

Privacy => It says that GDPR is terrible for small companies, but without any evidence (or a definition of "small"). I think for anyone big enough to be looking to challenge Facebook it's not a big deal, and broadly I like GDPR. "Don't store people's data unless you need to, and ask permission." Not a bad first effort at trying to fix surveillance capitalism.

Making data portable => This definitely, definitely would hurt Facebook more than help it. No one is sitting on Instagram thinking "I'd swap to Facebook if only it was easier". They're the monopolist with the inferior product plus network effects; that narrative makes no sense at all.


I had the same sentiment. Not to be harsh on Bloomberg, but for a few of these the journalist doesn't know what he is talking about.

The "sophisticated tools" to make data portable for example: building it in to any new social network wouldn't be all that difficult, and perhaps adds even a little mental structure over the data since now there are two types of output: the website/app itself, and the portable data export. Sounds good to me.


The crowd here is so cynical that I'd start to be suspicious that Zuckerberg is pulling a reverse psychology move on us.


Making data portable means opening up our systems to Facebook, at least as far as I understand Zuckerberg's proposals.


Yeah but my point is, are you more likely to migrate to Facebook or from Facebook at this point? Or are you describing a situation where you use both platforms but they have to interoperate so it's winner take all in the sense that you sit in one platform and it views all of them? Even if Facebook thought that could work, why take the risk? They already have a monopoly position, Microsoft didn't try to force Linux to be compatible with it they just tried to crush it.

My own take is that this would be a bad move for Facebook and is either genuine charity or hubris from Zuck, he genuinely might think Facebook is a great platform that can out-compete rather than just a network effect monopoly.


I am quite surprised Sheryl Sandberg all but disappeared in the past few months from these discussions.


He just asks government to make some laws so he can obey them and forget about the whole thing. He can't figure out how to be good, so he asks for explicit rules to at least be legal. Because a business only needs to be legal to continue.


The CEO has fiduciary duties related to not making any decision that would hurt the corporation. That's really all you should need to know about Zuck and the regulations he proposed.


Is there any surprise?

None.

It is good to see this being discussed. I am not making light of that.

The more I see the market shape our Internet, the more I feel some baseline law, to set reasonable, ethical norms, makes great sense.


Yes, that was pretty obvious.

The elephant in the room is advertising. This is all about ad targeting. Zuckerberg's "rules" totally ignore that. Of course.


This is exactly why regulation is needed: Articles like this, spreading misinformation and clickbait should not be allowed to be published.


On the one hand it's nice to see FB admitting to not being able to come up with a solution for their hate speech problem. On the other hand, putting it on the government when the government is struggling financially and FB is rolling in money, well, I don't know about that. They ought to reap the whirlwind they created and implode as things like the NZ attack become more common.


This article tries to make the case that, among other things, the GDPR "hurts everyone except Facebook". Wait, what?

The GDPR is a net positive for citizen privacy. This article's attempt to spin it as an anticompetitive bigco-driven monstrosity is factually wrong and ignorant of the GDPR's history.

I have similar (but less strong) doubts about this article's case that automated censorship is obviously better for "everyone except Facebook" if the rules about what to censor were set by tech molochs and not governments. I'm wary of any kind of censorship, but if we have to have it, I prefer the rules to be set in a public parliament and not some boardroom somewhere. "Corporate responsibility" my ass. At this scale, Facebook are public infrastructure and should be regulated as such.

I'll gladly believe the author that Zuckerberg will always and only do things that benefit Facebook. Zuckerberg has shown time and time again that every single public initiative he takes turns out to be in Facebook's direct benefit and often in everybody else's disadvantage. Of course there's a hidden agenda here, there always is.

But the author hasn't found it.


I just want to know if anyone is surprised by this.


This is absolutely true. We all need to realize that regulations such as the European GDPR, Article 13, etc. hurt newcomers a lot more than incumbent big players such as Facebook. Facebook will one way or another nudge/influence/incentivize users to click that "Agree" button and opt into sharing data with them. New players who don't have a surface that users visit frequently enough, however, will be at a loss, and there will be no way for them to use the kind of growth tactics Facebook uses to drive opt-in.

Please wake up and smell the coffee. Regulations are needed but GDPR is absolutely not the answer.


This article is wrong when it comes to GDPR. It could be a good thing. A friend invited me to MeWe via a Twitter post and I accepted. I would now like to be able to export, and if I wish delete, my data. That would be all of my data, including my contacts, and I would import it to MeWe, or whatever part of it is possible; at the moment contacts would do. Good for me and for MeWe. This is what GDPR would allow. There is no open format for this exchange, but with the demand on the market, someone would quickly come up with a service to convert the data, or the MeWe guys could come up with an "import data from FB" function.
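
For example, the contacts part of such a converter could be a few lines. Here's a sketch assuming a hypothetical JSON export shaped like [{"name": ..., "email": ...}, ...], turned into standard vCards that almost anything can import:

    import json

    def contacts_to_vcards(path: str) -> str:
        # Assumed export shape: [{"name": "...", "email": "..."}, ...]
        with open(path) as f:
            contacts = json.load(f)
        cards = []
        for c in contacts:
            cards.append("BEGIN:VCARD\nVERSION:3.0\n"
                         f"FN:{c['name']}\n"
                         f"EMAIL:{c.get('email', '')}\n"
                         "END:VCARD")
        return "\n".join(cards)

    # open("contacts.vcf", "w").write(contacts_to_vcards("fb_export.json"))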


Right!


Anyone want my Facebook password? I haven't used it in a while, but don't trust them to actually delete anything. Might as well poison the well.

edit:

u: james.childers@gmail.com , p: zaZLc&ncedu%X8apb$j9

Have at it.


I just reset your password, and logged out all active sessions.

I appreciate the gesture you made, but you're exposing your friends and families to the random dregs of the Internet by giving away your account like that.


Thanks, I guess?

> I appreciate the gesture you made, but you're exposing your friends and families to the random dregs of the Internet by giving away your account like that.

That's kind of the point. FB isn't going to change until it starts to suffer, and a good way to cause it to suffer is to reduce its usefulness.

The best way to do that is to make the feed useless.

shrug Whatever. The account's locked now and I don't really care enough to unlock it.

Interesting experiment.


Real question.

How does making your family deal with extra bullshit from the awful parts of the internet make facebook suffer? I don't understand your logic with this.


a) It's debatable how much people "suffer" from fucking with Facebook. Facebook is evil. Fucking with them is good.

b) Facebook will only suffer if the feed becomes (more) useless.

I don't understand how you don't see the logic. Bad feed -> worse FB experience -> decreased usage.


I don't think you addressed what the previous comment asked:

> How does making your family deal with extra bullshit from the awful parts of the internet make facebook suffer?

Assuming this is your real account with your real friends/family (which seems to be what you're saying, since you're convinced this would give real people a bad feed) then you're just exposing people who trust you to trolls.

I could text your friends/family with spam, and since it's coming from someone they know, they might click it. I could ask them for personal details. I could just straight out insult them out of the blue and hurt their feelings, or send awful things their way.

I really hope nobody did and in general would like to think people here wouldn't— but this is a public site and the internet is a big place.

I doubt for your friends this would register as "their feed becoming useless"; it'd mostly just register as you being awful out of the blue, until you explain you were hacked. In any case, creating no content and letting people know you are only reachable through other services is probably a more effective way of making Facebook less useful for the people you know. You could also just post noise yourself if you were committed to doing so.


Facebook makes money by selling advertising space on its feed. It builds advertising-friendly profiles of users that marketers use to target their ads. You can set up a Facebook ad campaign today and see it for yourself. OP made their profile open for anyone to log into and potentially edit. For example, changing their birthday, connecting to new people, and adding new likes and dislikes are all signals back to FB, which would no longer have a realistic advertising profile of OP. If everyone did what OP did, the overall advertising value proposition to marketers would greatly diminish. They could no longer say with a good degree of accuracy, "I am targeting 20 year olds who work in tech."


Don't worry, I understand that, and I know that's what OP was going for. My point is that OP didn't seem aware that it opened a vector of abuse for family/friends. A commenter mentioned:

> I appreciate the gesture you made, but you're exposing your friends and families to the random dregs of the Internet by giving away your account like that.

And OP responded with:

> That's kind of the point.

I don't think OP thinks the point is to expose friends to potential damage, just noise and nonsense unrelated to him. I was explaining that if that's what he wanted, he can "post noise" himself, as opposed to opening his account to the public internet to ensure the safety of his acquaintances.


What damage? It’s fucking Facebook. It needs to die. I cannot imagine a situation wherein some dickwad posts something that would cause more damage than they already do, and the long term gain, should a movement start and Facebook become infected by rampant noise, would be a net positive.


Sigh. I see you still don't see what I mean. I invite you to analyze the earlier replies not only by me but also by other people who thought what you were doing wasn't the best approach, because no one disagrees with your motivation, just your execution. Have a good one.


As usual, big business doesn't support free markets -- it supports regulations that it can deal with more easily than its competitors.


Imagine not being able to start a little grass-roots, home-grown social network without millions in capitalization for compliance. That's exactly what Zuckerberg wants.


This is very similar to Amazon pushing for a higher minimum wage. Facebook's big lawyers can deal with any regulation, and no regulation would pass without Facebook's approval. This essentially is rent-seeking behaviour, and it is why I have stopped using Facebook. I only need to move away from WhatsApp now.


I say repeal the law that exempts them from liability. It has had an unintended consequence of allowing large pseudo-monopolies to emerge. It has also given them an unfair advantage over TV and newspapers, who are liable for their content.


If you apply that standard, be careful with any walls on your house, because you would be liable for death threats spray-painted on them. Look at Gawker - they are liable for /their/ content. Not user content.


Revoking the ability of companies to limit their liabilities is very hard. Very few companies could exist (certainly no public ones) if there were no bottom to what personal assets could be clawed back in a corporate bankruptcy.

I’m not arguing that limited liability is perfect - just that we’ve built much of capitalism (including our ability to invest) on top of it.


Zuckerberg ruined the web, made sure that Facebook becomes synonymous with the internet for billions of people to make profit for himself using all means and created a generation of mentally unstable teenagers. This alone makes him among the worst criminals of the 21st century.


As far as I'm concerned, the Internet has and always will have only one set of rules: https://archive.org/stream/RulesOfTheInternet/RulesOfTheInte...


And you already broke them just by linking them.


Rules 1 and 2 only apply during raids (on another site), and no-one is raiding this comment section, so no, they did not break the rules.


That rule is not in the rules.



