What Should Be Done About Social Media? (acm.org)
43 points by rbanffy on Nov 17, 2020 | hide | past | favorite | 86 comments


There's a lot of misguided talk about Section 230 in this thread, but I wonder if repealing it actually is the answer for a different reason: maybe websites, including social media, should be liable for the content they host? Repealing section 230 would effectively make courts the moderators of the internet, which is probably the closest institution we have to something that most people would accept as a moderator.

It would be prohibitively expensive of course, and would probably mean that social media could not operate as it exists today. But maybe that is the logical conclusion to this whole mess? Maybe the power to reach millions of people shouldn't be "cheap".

Perhaps the result is that there would be tiers of social media accounts. People wouldn't be going viral at random with stupid hot takes, and it would be expensive (liability insurance?) to be an "influencer".


That is the issue I have with people in favor of more content removal: this is exactly where it ends up if escalated further. And since, for logistical reasons, we cannot have legal proceedings for every internet comment, sites would only allow the most mundane of opinions.

It would make me angry, because we would lose something very great about the internet just because someone got too offended and couldn't simply shut off their device. It is like destroying the pyramids because you don't like edges.

Maybe a separate children-net would be the solution.


What do you think your liability proposal might mean for a website like Hacker News? Or any kind of online forum, Stack Overflow, etc.?


It'd be a lot harder to run them, that's for sure. This is not an easy problem. There are obviously a lot of upsides to social media: stack overflow is very useful, I think hacker news is a positive contribution to society, and even FB and youtube and such provide a lot of positive value.

That doesn't mean that they aren't also capable of producing enormous negative contributions as well. As useful as stack overflow is, how does the magnitude of that positive contribution compare to e.g. a Myanmar genocide?


That article was sadly devoid of much information. It basically stated some of the problems, asked some generic questions we've all heard by now, and then ended. Did I just read the opening to a longer article or something?


In context, that article was page 5 of CACM for this month. It struck me, as it was not about the same topic as the rest of the issue, that it was meant as a conversation starter. I'd expect more in the December/January issues in the form of letters from readers, and perhaps an issue delving more deeply in the first months of 2021.


What should be done about this, what should be done about that... as if the author has any ability to impose their will and is not just shouting into an empty void.


The solution is obvious, build a better platform for connecting to friends and families and things you find interesting.

Support the fact that we all have multiple identities and personas and roles in the world.

Support the fact that any given piece of content can be judged in an infinite number of dimensions... not just a few emoticons. Especially relevant these days, truth, politics, humor, cute are all orthogonal. Let the user make up their own ratings.

Support the fact that we don't want to share everything with everybody all the time. In fact, we don't want any new friends to automatically be able to dig back decades to stuff we shared with our friends back then.

Do NOT ever show me a photo of my dead pet, friend or relative in order to try to get me to recycle old posts.

If you really want to be ambitious, support capabilities... so I could give you a pass to look at photos of a project I'm working on, and you could forward that pass to whoever, and they wouldn't even need an account to see it... the pass is the key. The pass could be revoked at any time, if abuse happens.
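The capability idea described above can be sketched in a few lines. Here is a toy Python sketch (all names are hypothetical, not any real service's API): a pass is an unguessable token, forwarding it is free because the token itself is the key, and revocation is a single deletion.

```python
import secrets

class CapabilityStore:
    """Toy sketch of revocable capability passes: a pass is just an
    unguessable token mapped to a resource. Whoever holds the token
    can view the resource or forward the token on; no account is
    needed, and the owner can revoke the pass at any time."""

    def __init__(self):
        self._grants = {}  # token -> resource id

    def issue(self, resource_id):
        # Mint an unguessable pass for the resource.
        token = secrets.token_urlsafe(16)
        self._grants[token] = resource_id
        return token

    def resolve(self, token):
        # Returns the resource, or None if the pass was revoked.
        return self._grants.get(token)

    def revoke(self, token):
        self._grants.pop(token, None)
```

This matches the "revoked at any time, if abuse happens" behavior: anyone the token was forwarded to loses access the moment the owner deletes the grant.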

Just some ideas... I'll build it, if a sufficiently large interest appears.

[Edit] Bonus points: Build a set of APIs that allow it to be accessed like imap folders are in email... with a number of clients and a federation protocol.


I don't think social media is going away, so we have to deal with it somehow. Regulation might be a good idea if we can figure out the right way to do it.

Another thing is that it would help if there at least existed large, successful social media platforms that aren't cesspools. We can't force people to leave Facebook or whatever if they don't want to, but having a good alternative platform that doesn't put the most inflammatory content in front of the most eyeballs would count for a lot.

Social networks are a new thing and we're still figuring out how to moderate large numbers of people, but I think it can be done. I mean, a lot of things seem impossible until someone does it. Search engines used to be plastered with ads and nonsense on the front page until Google came along and demonstrated you don't have to do that. Wikipedia would have sounded like a ridiculous idea before it launched. You just need the right combination of people with a specific goal and the right incentives, technology, and the resources and expertise to build the product, and a minimum-viable community to bootstrap with.

(This isn't a new idea, but so far no one seems to have unambiguously succeeded. I wonder what the missing piece usually is?)


I see very little inflammatory content on my Facebook feed. It's all a matter of who you follow.


I feel like the people who discuss social media policies are people who don't even use it. I agree it may be quite toxic, which is even why I uninstalled all of it on my phone recently, but I know many who benefit (socially and/or financially) from tiktok, instagram, etc..


At what cost, and with what kind of externalities? Individual rewards that come at the expense of social / public health, the inability to have nuanced political conversations, and depressive symptoms for a broad segment of the userbase sounds like a "social ponzi-scheme" to me.


> I feel like the people who discuss social media policies are people who don't even use it.

Many of the people discussing regulating the fossil fuels industry are people outside of it: when you aren't directly benefiting from a system, it is easier to see what its externalities are.

> I know many who benefit (socially and/or financially) from tiktok, instagram, etc.

Again, this fails to factor in the externalities to people off the platform (or even the harms to other people on the platform). The cost benefit analysis can't just look at the people who actively use (and benefit) from something; if you had asked only people who worked in the refrigerator/aerosol industries in the 70s whether CFCs were a bad thing, you'd get a pretty skewed answer. See also:

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." - Upton Sinclair


To me, it seems like the people who have the biggest issues are the ones who get called out for breaking platform rules or just don’t gain the kind of popularity they desire. Those who lie and astroturf and try to gin up fake popularity are in one camp; those who discover that their opposition is just more effective in another. Both camps want to regulate social media due to perceived unfairness. Neither one really seems to care about privacy for privacy’s sake.


It seems that way, because these are the people that are the most vocal about it. Indeed, most people who complain about this are just grifters. The reason you don't hear about normal people complaining about this is because they are nobodies, they have no other platform to speak out when they get banned.

The rules are almost always really vague and it's not clear what you can or cannot do. For example, facebook can ban you if what you've said might be interpreted by someone as offensive. From what I remember, they specifically say that what you actually meant does not matter. If someone just might think it's offensive, you get a 30-day ban. At least they used to have this rule at the time I left facebook, and I left it precisely for that reason.


Nothing should be done about social media. It's just like being in a crowd of people. Free speech should be the rule.

Let's be honest. Big media companies want to control every narrative. They want to tell you what to believe and how to act. Social media democratizes that-- your family and friends can tell you how things are, wherever they are at.

Can people lie, or bend the truth to suit their narrative? Absolutely. Does the same thing happen from every other source of information? To some degree, yes.

Social media should be kept free.


People should leave in droves;

Not because of anything the companies did, but just because it's unhealthy; whether it's the disinformation chamber, confirmation bias, or social bullying.

News should stop giving tweets credence; Senators and Presidents should use "official" means of communication with vetted material (ie Retweets are a bane).

Conversation could definitely be improved: eliminate links, hashtags, and text on images. Give higher scores to tweets that receive neutral-sentiment comments from within a group.


I think one answer is a new, lighter weight libel law that is a civil offense.

At the moment, libel and defamation laws in the US have too high a bar to overcome. In particular, the public-figure standard can be stretched to cover online profiles. The tests of negligence and malice are also too strict.

Strict liability for untrue or derogatory statements posted on social media platforms could make the poster liable for fines of $5k to $10k. This would go a long way toward making discourse more factual and stemming hyperbole.


Self-regulation is the only way forward. I think of social media as a drug you can easily overdose on.

When I was younger this junk made me so miserable. You invite people you'll never meet to judge every aspect of you. I've been social media / online dating free for about 2 years, and I've seen massive gains in my income and my emotional well-being.

Like with other drugs, self-regulation is the only real path. You can't stop people from hurting themselves.


But isn’t there a lot of actual, governmental regulation in drugs? If you think of social media as a drug, wouldn’t it then make sense to impose similar regulations on them?


Bad metaphor. Drugs are heavily regulated by the government. I mean, personally I think recreational drugs should be legalized, but definitely regulated. Further, maybe you can’t stop determined individuals from hurting themselves, but policy and regulations absolutely solve problems at large: seat belts, traffic laws, FDA, manufacturing laws keeping lead out of your kid’s toys, etc.


I don’t think regulation of drugs works.

If you want to post every detail of your personal life on social media and suffer the consequences, you have a right to.

Basically I don’t see any way to stop people from hurting themselves via social media without violating their rights.

If I had a magic wand I’d essentially ban all social media / dating apps. I personally see more downsides to these platforms than benefits. But I’m not a king. I can only control how I decide to spend my time.


Sorry, I’m still hung up on the drug metaphor. Regulation of drugs does work. You know your prescription of X is not cut or laced with something because it’s regulated. It’s how you know how strong your alcohol is. It’s how you know what you’re buying is what it is claimed to be. I trust the pharmacy over the street corner. To reiterate, I’m pro legalization, but I’m also pro regulating.


I think the analogy works well in the sense that we know we don't want to live in a world where the government is restricting what types of websites are legal in the same way they decide what types of drugs are legal. Even trying to pin down what counts as "social media" is pretty difficult.


Basically I don't think drug prohibition works, and thus, even though social media is very harmful, a social media prohibition would also fail.

You can only make the choice to not poison yourself.


More and more I am coming over to (what I interpret to be) Greenwald's view that the safe harbor provision is fine. I think the attack on safe harbor is a way to make it easy for supranational unelected and unaccountable entities like Facebook to engage in censorship activities which will limit discourse to whatever the power brokers in Washington deem acceptable. The social media companies must be reined in, but not by suppressing speech. Instead, I advocate for their nationalization so that way the 1st amendment will unambiguously apply to their activities.


Do you really want to nationalize vast troves of private information and sophisticated ad targeting tools?


If "vast troves of private information and sophisticated ad targeting tools" are too dangerous in the hands of government, then why aren't they too dangerous in the hands of a corporation?

The former is democratically elected. Facebook and Google aren't even controlled by their shareholders.


Because governments have a monopoly on violence, and can use your data against you to imprison you, bomb your village, defame you, etc. I can quit Facebook or Google much easier than I can quit the TSA database or DMV record system.

And I don't think too many people would disagree that the best software developers, machine learning people, and computer security people are going to tech companies rather than to government offices.

And finally, governments are influenced by temporary forces of who's in power, like we are seeing now in the United States. And whoever is in power can replace departments with people to do what they want, which can be borderline illegal. They only have to worry about the next N years, where N is often a number less than 5.


The good news is that with the death of the profit motive, machine learning applications will become less important for the operation of Facebook's core mechanisms.

Yes, you are right about the government, but I think FB is being used by the government to get around the built-in legal protections. Moving it to the other side might make those tradeoffs more visible, but also more fair.


I think I have a much dimmer view of Facebook (16) and Google (22), and a much brighter view of American democracy (232), than you.

Time will tell.


Why infect the democracy by shackling one of these companies to it? Companies basically all die eventually, but they’re much more durable the more closely attached they are to governments.

Facebook proper is losing droves of users in the US. We might do well to keep the playing field level and let someone else beat them. More dominant market players from the past have misstepped and gone under.


News flash, the government already has it. They either siphon it off into the NSA database, they employ spooks at the companies, or they have data sharing relationships with law enforcement. If we nationalize them, we can also kill the profit motive and eliminate the advertising.


The ability of a spy agency to gain access, while deeply troubling to me, is quite different from turning it over to politicians. Would you trust your own favorite political villain with access to all of that?


I trust the spy agencies far less than I do my elected officials. These guys' job is to lie, cheat, steal, and murder at a grand scale. At least it is in an elected politician's job description to represent me. Also, many of these villains are getting briefings from and giving direction to said agencies already.


I trust spy agencies far more than elected officials. Those employees get vetted, elected officials don't (and they most likely lie to get elected as well).


Try reading about Iran-Contra or the NSA Snowden disclosures sometime.


As opposed to the wars and "wars" that politicians have started?


> Would you trust you own favorite political villain with access to all of that?

Yes. Information asymmetry benefits those in power.

Better an even playing field, where everyone has access to everything.

Or, you know, don't use social spy platforms if you care about privacy?


You can avoid social media all you want and still end up with a shadow profile that’s quite info rich. I’d prefer these companies to eventually fade away like most companies do, rather than become permanent government appendages.


I have almost never heard of a more dangerous solution to this problem.

Under absolutely no circumstances should the government control a large percentage of media.

Our current government seems pretty friendly to the First Amendment, but you have no idea if or when that might change.


Have you heard of the BBC? Try reading Manufacturing Consent. The government doesn't need to, they already get their message out in spades with virtually no opposition on anything they consider important.


The BBC is not a monopoly. There's a huge difference between a state-run public broadcaster and a state-run monopoly of social media.


Is an unaccountable corporate monopoly with a revolving door with government better?


They should be forced to federate their services to counter their anticompetitive monopolies.


Make the content private, make membership invite-only, add an ignore-list function, and you fix just about everything. Ad sales take a hit, though.


It should be shut down for the benefit of mankind.


It seems to me that social media companies should only be allowed the legal protection of being "platforms" if they are truly neutral. No censorship, no algorithms.

Facebook should only be allowed to show you a feed of all your friends' posts, in chronological order. Twitter should only show you the tweets of people that you've followed, in chronological order.
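Mechanically, such a feed is nothing more than a timestamp merge of each friend's posts, with no engagement ranking at all. A minimal sketch (the data shapes here are illustrative, not any platform's actual API):

```python
import heapq

def chronological_feed(friends_posts):
    """Merge each friend's newest-first post list into one feed
    ordered purely by timestamp -- no algorithmic ranking at all.
    Each post is a (timestamp, author, text) tuple."""
    return list(heapq.merge(*friends_posts,
                            key=lambda post: post[0],
                            reverse=True))
```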

Aside from solving any potential censorship or manipulation problems, it would also make a much more pleasant user experience than the unnavigable garbage that is Facebook's current product.


The more I think about this, the more I find myself coming around to a hybrid of your idea (strict neutrality) and Zuckerberg's original position (moderation of free speech is not their job).

Facebook et al.'s fundamental sin was optimizing the sorting and broadcasting algorithms.

I get it drove growth and made them the giant companies they are today. But it's not really a core competency, and now it's more trouble than it's worth. In fact, it's metastasized into an existential threat to them.

They should announce they're going back to naive ranking and sharing, and that they expect all external moderation requests to come through legal channels. (And then spend their billions digging the widest moat of integrated services and infrastructure they can)


> They should announce they're going back to naive ranking and sharing, and that they expect all external moderation requests to come through legal channels. (And then spend their billions digging the widest moat of integrated services and infrastructure they can)

The trouble is that algorithmic feeds work really well. If they remove algorithmic feeds in an environment where competitors are still able to have such feeds, then they will be outcompeted in the long run.

This is one area where legislation and strict enforcement barring algorithmic feeds would work, though execution would be difficult.


Or at least give the user a preference as to the algorithm/sorting method they want (or the ability to implement their own! Will never happen, sadly). I think it’s the lack of transparency into the curation, and the lack of control over it, that is the issue, not the particular queries being run.


Most people have no idea how to change preferences, so whatever is default might as well be the only option. This is why Google pays billions to be the default search engine, not even the forced/only option.

It matters so little that it’s kind of odd that these sites don’t give the option, so at least they could use it in their arguments to show they have it. Probably only 0.01% of users would ever actually use it.


Make a secondary market for algorithms and split part of the revenue with twitter. I'd love that.


So all your twitter data and interactions like clicks and time spent will be sent to whoever gets you to install their algorithm? I can see why Cambridge Analytica seemed like a good idea to some people now...


Almost all of that stuff is public data which can already be scraped.


Section 230 is explicitly no censorship. The problem is that twitter has gotten big enough that you can't seem to punish it by removing its Section 230 protections, as that would essentially be the death penalty for the platform.

Hence why supposedly there's an exodus to Parler.


Section 230 of the Communications Decency Act makes no designation or distinction of “platforms”. You have deeply misunderstood the law.

What it does is make clear that people or organizations who run internet sites that allow 3rd-party submissions will not, for the purposes of the law, be considered the “speaker” of the content in those submissions. It also states that this holds even if they moderate, including editing the content to conform to internal standards.

The law specifically allows censorship, algorithms, banning, and editing of anything users submit. The whole purpose is to clarify that content hosts are allowed to do this without being held liable for the content of the speech they are moderating.


Parent and link speak to should, not must. I'd assume that they start from a position that current laws are unsatisfactory.


The parent seemed to imply that there should be no permissible moderation at all, which strikes me as undesirable. Also the use of “provider” is common to factually incorrect descriptions of the law.

I would also add that no one prevents the original poster, or anyone else from using other services. I mean, here we are on a narrow social media site discussing the topic.


I think moderation has become far too strict on the net, and I have seen countless examples of censorship driven by partisanship. I don't think all people want that. If people prefer it that way, there need to be separate spaces.

But I believe this is at the core of the issue when people complain about moderation. I think it is very true, even sites like Wikipedia are negatively affected on political topics while paid external editors are free to roam to push up articles about themselves or their companies. This is the worst kind of moderation I can imagine.

If we talk about moderation against spammers and scammers, a vast majority would agree on the other hand. But these are two separate issues in my opinion.

> I would also add that no one prevents the original poster, or anyone else from using other services.

I think there is a concerted effort to purge alternatives in the current market situation, and I think this is nothing more than an excuse to justify certain content moderation. Platforms have this freedom, but there are also consequences to exercising it.


Parent said that presentation of originally-submitted content should be 1:1 and neutrally ranked (e.g. by timestamp).

That seems a fairly modest proposal.


It's a great proposal for a product roadmap, or for building a competitor, but an awful proposal for legislation. Why should the government dabble in deciding how social media builds its product? You'll just create a moat that entrenches FB and the like.

I think people VASTLY overestimate the influence of things like the FB feed on the real world. The best evidence they ever seem to have for any claimed effect is the number of people FB say saw/interacted with a post. They have a long history of inflating those numbers, yet the critics are all too happy to use them. The activity on social media that seems to actually result in real world outcomes seems to generally be the boring work of community building.

Also, how do you deal with spam or noisy posters in a "1:1 and neutrally ranked (e.g. by timestamp)" feed?


Who said anything about Section 230? The GP was describing what he thought the law should be, not what it is.


I barely use twitter, I follow like 2 people. But I have still noticed how offensively useless the timeline is. The timeline has never been anything more than an out-of-context jumble of nothing, even with how curated my following list is. It is actually shocking to me how utterly incompetent it is. It takes what I want to see and actually somehow strips it of value. There is no reason for me to use the timeline in any respect ever, but it's still the landing page.


Agreed. Tweetdeck is able to show a normal timeline, but the damage is done; all the people like us have stopped using it. It's on a downward spiral.


Most people strongly prefer when you don't show them a chronologically ordered feed.

I don't think we should design our legal frameworks around arbitrary user-hostile restrictions.


^ this. Even among people who claim to prefer chronological feeds, double-blind tests show a strong preference for algorithmic feeds.

If grandparent is saying “algorithmic feeds are so popular-yet-dangerous that we should regulate them like alcohol” then ok, that’s a valid starting point - but we should at least acknowledge that this is user-hostile in the short term.


I am so tired of this line of argument. Those are totally arbitrary suggestions without any consideration for what is practical or legally plausible.

On what basis should website owners lose their right to determine what goes up on their own website? What about creative control of one's own work? What about free speech for the site creator? What about the property rights of those paying cash to keep the site operational? The law should not attempt to dictate the specific ways in which it is legal for computer programs to sort information for display on a website... this is absurd.


I fully agree that website owners shouldn't lose the right to determine what goes up on their own website, but with the caveat that this right also comes with legal liability for anything that's on there.


So in effect you're saying moderated message boards should be illegal. Why should that be the case?


No algorithms?!


I agree. Isn't that what we get if section 230 is repealed?


I don't think so, section 230 is only about the platform being on the legal hook for something that's posted.


To quote the law

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”

It also provides protection for all good-faith moderation, even of constitutionally protected speech.


However, when Facebook “fact checks” a post, they become a publisher.

And the moderation exemption is very specific to types of content. It doesn’t, for example, allow blanket moderation for political reasons. It specifically mentions lewd/excessively violent content. It isn’t a blanket “moderation” exemption.

Also, “good faith” is explicit. Moderating one political viewpoint while allowing another isn’t good faith. Stop the Steal is censored. Many ANTIFA groups aren’t.

From the law:

No provider or user of an interactive computer service shall be held liable on account of— (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).


> However, when Facebook “fact checks” a post, they become a publisher.

Says who? I hear this over and over, but there's no legal basis for any of this. It's horse-dung.

> It doesn’t, for example, allow blanket moderation for political reasons

Who decides what's political?

> Moderating one political viewpoint while allowing another isn’t good faith.

Define "good faith". Also, you missed the "otherwise objectionable" part of that law. What's "objectionable" to one platform may not be to another. If I run a message board for adherents of a religion that say, opposes gay marriage, should I not have the option to censor posts that support gay marriage? If I run a message board for environmentalists, do I have to let climate change deniers write whatever they like?


Replying to your first point, I think it's when Facebook adds their own original content. The bit of law quoted above mentions:

> shall be treated as the publisher or speaker of any information provided by *another* information content provider

Emphasis mine. I think that implies any information provided by themselves they're on the hook for.


Which is fair enough. If they write a fact check that turns out to be libelous or illegal in some way, they are liable for that specific piece of content. I think the law is clear about that.

GP was saying something else entirely. They were contending that if FB moderates or fact-checks even a single user post, they are now the publisher of everything that users post on their platform. Which is not at all how the law works.

Platforms are still liable for content that they author and post themselves. They just aren't liable for what their users post, even if they perform moderation or "censorship" on those users' content.


The phrase “otherwise objectionable” is so vague that courts have basically interpreted it to mean violations of community standards, or basically whatever. I’m no fan of any kind of political censorship, but they do own the servers. They’re under no obligation to host any content, and we’re all free to go elsewhere.

That said, I think most of their moderation is awful and their fact checking is ridiculous at best.


Social media are amplifiers for pay. What a ridiculous idea that we should leave such devices scattered about without moderation.


Half of this proposal would neuter the amplifier aspect of social media.


> First, there is the difficulty of vetting content from a very large number of users.

But they do vet content, and often with political overtones, so the claim to 230 protection doesn’t make sense. As soon as they “fact check,” they have become an arbiter of truth and thus a publisher. The complaint that “it’s hard to hold them to 230 because of scale” really isn’t our problem. If they can’t keep control of their own platform, that doesn’t give them a free pass. Either they are neutral and enjoy 230 protection, or they aren’t neutral and get treated like a publisher. Hiding behind “community standards” that are opaquely and inconsistently applied seems to be their strategy. But how can a private Facebook group be violating standards it didn’t create? The community should set the standards, not Facebook, at least in closed groups. But that’s not how it works: Facebook applies its own standards, which makes it a publisher, because those standards are arbitrary and not documented.

Either you take an editorial stance (which they have,) and surrender 230 protection, or they don’t and keep the 230 exemption.

The problem I have with Facebook isn’t that they take action against targeted harassment or purely illegal content (child porn, etc.); it’s that they take action against ideas they disagree with. For example, posting an opinion article about election fraud gets flagged/tagged/suppressed despite it being literally an opinion. At that point, by labeling content as “wrong,” they’ve taken an editorial stance. Even fact checking is editorial: you are decorating content because of the ideas expressed, and such decoration isn’t evenly applied; it’s applied based on a particular viewpoint, which again is editorial. They are, both directly and indirectly, taking sides on an issue under the guise of “preventing misinformation.” That is much different from policing content harassing specific individuals.

Example: the Stop the Steal Facebook groups were suspended despite there being nothing illegal about those groups. They were groups committed to a certain opinion. Meanwhile, the page of Rose City ANTIFA, a group that has been involved in violent protests, still stands. What's the difference? One group is pissed at the system and the other group is, well, pissed at the system. One happens to be on the right, the other on the left. A big difference, though, is that the Stop the Steal groups don't make violent protest a fundamental part of their message. I don't care to debate the merits of either side; the point is that when Facebook takes a side, it is no longer operating within the letter or the spirit of Section 230. If we were talking about designated terrorist organizations, removing their content would be uncontroversial and consistent with the law. But Republican groups aren't legally designated terrorist organizations any more than BLM is, so a decision to remove one and not the other is editorial.

Were articles about Stacey Abrams claiming a rigged election in 2018 tagged with fact checks? No. Were articles about her claiming to have won tagged? Nope.

Facebook is entitled to do what they want. My point isn't that Facebook should allow everyone and everything; my point is that they can't take an editorial stance, or even "fact check," and still expect Section 230 protections.


> In fact, the proliferation of "bad speech" on social-media platforms has become politically untenable, and now all social-media platforms are actively fighting "bad speech."

"Bad speech." That's rather Orwellian, and social media platforms should refrain from censoring "bad speech." They should even be legally prevented from doing so if necessary. Free speech protects bad speech. And yes, everyone knows these companies are "private" companies.

> More fundamentally, however, do we really want Facebook to regulate the speech of more than 2.5 billion people?

Of course not. This is the problem with the pro-censorship people who view Facebook as evil and yet want it to be the arbiter of speech for billions of people.

> The basic policy question—how to regulate speech on social media platforms—seems inseparable from another policy concern, namely how to deal with the concentration of power in technology.

Disagree. Regulating speech and anti-trust are two separate issues. If we break up these tech companies and they all collectively continue to censor "bad speech", then we are right back to square one.

1. If these companies want to be platforms, then force them to be platforms: no picking sides, no censoring content, whether it is to appease Trump, China, Israel, Saudi Arabia, the DNC, etc. No legal content should be censored on these platforms. Make it a crime, meaning prison for the CEO, the board of directors, etc.

2. Maybe we should expand anti-trust from monopoly to duopoly, or maybe higher. Why not say that having at least three genuine competitors in a particular sector or industry is a societal good? So if a sector like search, OS, or social media is dominated by one or two players, we use anti-trust to create more competition.

3. Beyond anti-trust, maybe if one company is just too big, even if it is not a monopoly, there is justification to break it up as well.

But the overriding concern at the moment is issue #1: censorship. Then we can worry about anti-trust.


Social media companies should be publicly owned and regulated, paid via taxes, not for profit, with strict regulations for privacy and security. A partially federated design would help to further realize the original intention of the internet.

This seems unlikely to occur anytime soon, however, even before considering the bureaucratic complications of ensuring a fair design and implementation that weighs the needs of national security and law enforcement against civil liberties. A good start might be to consider the types of governing bodies necessary to ensure that every interest group is satisfied (if such a thing is even possible).

The current gross monopolies on search/social/ecommerce are antithetical to American values. These monopolies yield concentrations of power and wealth for parties not properly accountable to the public. This is fundamentally wrong and a danger to the fabric of our democracy.


I'm not sure much would change if it were publicly owned. You'd still have political interference from whoever is in office, and the same arguments over what counts as "disinformation," with the ruling party having a clear bias ("oh, that negative story about me? Disinformation").

And no doubt the intelligence agencies would have a field day getting access to a public social media platform.


All great points, which is why the architecture of a governing body that aims to satisfy the interests of end users while addressing your other concerns is foundational. The governing body wouldn't be fully under the authority of the three branches of government. Rather, a hybrid model that gives the people direct interest in social media governance would allow for desirable mutual outcomes.

The main goal is to remove the capitalist and private interests, while leveraging the good parts of our elected government, and also to directly integrate voices of the people.

The entire exercise is thus to develop such a governance model specifically for social media. We could even create a "digital constitution" to re-think our democratic experiment in the age of social media.



