
Proposals to Amend Section 230 Share a Similar Goal: Damage Online Users’ Speech - DiabloD3
https://www.eff.org/deeplinks/2020/06/two-different-proposals-amend-section-230-share-similar-goal-damage-online-users
======
leetcrew
most of this stuff sounds like bad news, but I have felt for a while that we
need to address the issue of moderation on the largest social media platforms.
I don't think it should be a binary choice between safe harbor and doing any
moderation whatsoever, but there ought to be some sort of gradient for
liability. that is, the more you remove stuff you don't like, the more
accountable you should be for whatever stuff you continue to host. as an
extreme example, suppose you have a billion users and a hundred of them are
making libelous posts about some guy. you shouldn't be able to remove
everything but those posts and claim safe harbor because it's user generated
content.

~~~
csnover
> the more you remove stuff you don't like, the more accountable you should be
> for whatever stuff you continue to host

It seems to me that this just creates a perverse incentive to never moderate
anything. How would that be an improvement? At a minimum, you’d end up with a
space overrun by spammers.

~~~
dx87
The platform can provide tools that allow users to moderate what they want.
RES on reddit is good at that, you can make it so you don't see posts from
certain users, filter posts and links, etc.
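
Mechanically, that kind of user-side filtering is simple. A minimal sketch in the spirit of RES (the block lists, feed, and user names here are all hypothetical, not RES's actual implementation):

```python
# The platform serves everything; the user's own block/keyword lists
# decide what actually gets rendered.
blocked_users = {"spammer42"}
blocked_keywords = {"crypto giveaway"}

def visible(post):
    """Hide posts from blocked users or containing blocked keywords."""
    if post["author"] in blocked_users:
        return False
    body = post["body"].lower()
    return not any(kw in body for kw in blocked_keywords)

feed = [
    {"author": "alice", "body": "Interesting article on moderation"},
    {"author": "spammer42", "body": "Buy now!"},
    {"author": "bob", "body": "Free CRYPTO GIVEAWAY inside"},
]
filtered = [p for p in feed if visible(p)]  # only alice's post survives
```

The key point is that moderation happens client-side, per user, so the platform itself never has to make (or defend) the removal decision.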

~~~
csnover
What happens if the users are the ones that moderate to include only the
libellous content, per the GP’s example? Is the provider then responsible for
that in this world where more moderation = more liability? Do the volunteer
moderators become the liable parties instead? What if a user is also an
employee of the company, does that make the company liable again?

While I like the idea of self-policing communities, it doesn’t always work so
well in reality (Slashdot’s comment voting system was great at burying bad
content during the site’s heyday—though it never eliminated it—now it works
pretty poorly because most of the audience has shifted away. Reddit’s
subreddit moderation system can be great, except for the racism[0].)

This seems to me like a poorly thought through solution and I’m not really
sure what the goal is.

[0]
[https://www.theatlantic.com/technology/archive/2020/06/reddi...](https://www.theatlantic.com/technology/archive/2020/06/reddit-racism-open-letter/612958/)

~~~
josephh
GP’s comment isn’t referring to the tools given to volunteer moderators. They
are referring (and strictly so) to the tools that end-users can use to
moderate what they themselves want to see.

~~~
csnover
Oh, I see. My mistake. It makes no sense to me that a user choosing to censor
content would solve _any_ institutional problem so it didn’t even cross my
mind that someone would bring it up. Thanks for the clarification!

------
pjkundert
Liberty means liberty for all, _especially_ those whom you intensely disagree
with. It also means allowing someone to do or say something you consider
stupid -- because, someone, somewhere will consider whatever you're doing
stupid; do you really want some random self-righteous inquisitor shutting your
speech off?

Furthermore, if some group is really proposing or performing heinous, illegal
acts, we should _want_ to have them feel comfortable associating with each
other in public. Police can then _do their job_, infiltrate these groups, and
track down, charge and prosecute these miscreants.

I just don't want to see it. And, they probably don't want to see me berating
them. There's a solution to these kinds of situations...

We're techies here; I find it surprising that no-one has considered K-means
Clustering: [https://brainbomb.org/Artificial-Intelligence/Machine-Learni...](https://brainbomb.org/Artificial-Intelligence/Machine-Learning/ML-Mixture-Models-K-means-Clustering/).

My hypothesis is that Google, Facebook, et al. _have_ investigated using
k-means clustering, and have decided against it, because it reduces the
"drama" on their platforms, reducing their revenues...
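
For concreteness, here is a minimal pure-Python sketch of the idea: represent each user by a hypothetical "interest vector" and let plain k-means split them into clusters that need never see each other. The data and all names are invented for illustration; this is not how any platform actually works:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its cluster, repeating for
    a fixed number of rounds."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # leave an empty cluster's centroid in place
                centroids[i] = tuple(sum(xs) / len(members)
                                     for xs in zip(*members))
    return clusters

# Two hypothetical "camps" of users, embedded by their interests.
users = [(0.1, 0.9), (0.2, 0.8), (0.15, 0.95),   # camp A
         (0.9, 0.1), (0.8, 0.2), (0.95, 0.05)]   # camp B
clusters = kmeans(users, k=2)
```

On well-separated data like this the two camps fall into separate clusters; each could then be shown its own cluster's content, rather than each other's.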

~~~
manfredo
> Liberty means liberty for all, especially those whom you intensely disagree
> with.

Correct, and said people have the liberty to create their own websites.

Freedom of speech also means freedom from compelled speech. Removing section
230 means all sites have to be like 4chan - devoid of moderation and hosting
the content of genuine Nazis and the like.

While it's fair to point out that moderation may be biased, forcing sites to
remove moderation for fear of being held liable for content users post is not
a positive change.

Note that the groups pushing the removal of section 230 are some of social
media's biggest competitors (namely traditional media). This is a calculated
move. Remove section 230 and social media becomes a cesspool. This pushes
users out of social media and back in front of cable news and newspapers.

~~~
carapace
Or, remove sec. 230 protections and social media becomes Disneyland and the
cesspool recedes to the "darknet".

(Whether that's a good thing or not I dunno.)

~~~
manfredo
This will not be the result. The scale of content posted on social media makes
it impossible to remove content that could result in lawsuits or charges down
the road. The only way for social media companies to avoid liability would be
to eliminate moderation. Either that, or remove comments and user-uploaded
content outside of a much smaller pool of vetted people - and at that point
it's no longer really social media, is it?

This thread is being throttled, reply in edit:

Understand that removing section 230 means that companies are treated as
though they are the ones making the statements that users post. One user
recruits terrorists, or participates in sex trafficking and the whole site is
responsible for recruiting terrorists and engaging in sex trafficking in the
eyes of the law. With stakes this high delegating moderation to users doesn't
cut it.

If companies like Facebook or Google don't have the resources to comply then
there's zero chance that a smaller company with fewer resources can do so.
There's a reason why removal of section 230 is being pushed by cable and
traditional news. It's a death sentence for some of their biggest competitors.

~~~
carapace
> The scale of content posted on social media makes it impossible to remove
> content that could result in lawsuits or charges down the road.

Impossible or just expensive?

Reddit amortizes moderation over its user base; couldn't they share liability
too? FB and Twitter could do likewise.

On the other hand, if it really is too expensive to moderate at scale maybe
these companies should be broken up?

------
lliamander
While I am concerned about the potential negative consequences to altering
section 230, there are several elements in the DOJ proposal that I think would
be excellent ideas:

> 3\. Promoting Competition A third reform proposal is to clarify that federal
> antitrust claims are not covered by Section 230 immunity. Over time, the
> avenues for engaging in both online commerce and speech have concentrated in
> the hands of a few key players. It makes little sense to enable large online
> platforms (particularly dominant ones) to invoke Section 230 immunity in
> antitrust cases, where liability is based on harm to competition, not on
> third-party speech.

Breaking up advertising monopolies (which all of these social media platforms
essentially are) would probably be a good outcome.

> b. Provide Definition of Good Faith. Second, the Department proposes adding
> a statutory definition of “good faith,” which would limit immunity for
> content moderation decisions to those done in accordance with plain and
> particular terms of service and accompanied by a reasonable explanation,
> unless such notice would impede law enforcement or risk imminent harm to
> others. Clarifying the meaning of "good faith" should encourage platforms to
> be more transparent and accountable to their users, rather than hide behind
> blanket Section 230 protections.

Is it too much to ask for social media companies to actually abide by their
TOS? How often have we seen people complain (outside the realm of politics)
about getting banned without explanation or appeal? If people are going to be
building their livelihoods around these platforms (for content distribution,
ad revenue, etc.) then I think it's time for these platforms to be more
transparent with their content creators.

~~~
akersten
The outcome of "breaking up advertising monopolies" can be achieved with
existing anti-trust law, without cannibalizing an existing protection that
benefits every UGC website large and small.

> Is it too much to ask for social media companies to actually abide by their
> TOS? How often have we seen people complain (outside the realm of politics)
> about getting banned without explanation or appeal?

A physical business can trespass you for any non-protected-class reason they
want. Online, one should expect no different treatment.

~~~
lliamander
> The outcome of "breaking up advertising monopolies" can be achieved with
> existing anti-trust law, without cannibalizing an existing protection that
> benefits every UGC website large and small.

I don't think it requires cannibalizing existing protections in order to prevent
section 230 from being invoked in anti-trust cases. And for the record, I
don't think I agree with all of the DOJ recommendations, but I think there may
be room for improvement.

> A physical business can trespass you for any non-protected-class reason they
> want. Online, one should expect no different treatment.

I'm not entering into a legal agreement simply by entering a physical
business. However, terms of service are a legal agreement. A lot of tech
businesses seem to be under the impression that a TOS doesn't actually impose
any obligations on them, but in fact it does.

------
notadev
>Section 230 is an essential legal pillar for online speech. And when powerful
people don’t like that speech, or the platforms that host it, the provision
becomes a scapegoat for just about every tech-related problem.

Does anyone actually believe this is an authoritarian attempt to squash speech
that is "speaking truth to power"?

This is the result of years of work to remove any dissenting voice from the
generally Left-leaning sites of the Internet -- Twitter, YouTube, Reddit, etc.
First they started disabling comment sections, calling all opposition
"trolls". Then they tried to deplatform, go after advertisers, etc. Most
recently, two sites were dropped from AdSense because they had unmoderated
comment sections. Not only is Google saying what can live on its platform,
but also what can live on the sites of others.

But wait...they're a private company and have every right to remove people
from their platform. Fine, then they lose all government protection from the
results of their choice to decide what legal information people can and cannot
see.

~~~
mcintyre1994
It's hard not to see this as an authoritarian backlash to "speaking truth to
power" when the President is arguing Twitter should lose 230 because they fact
checked his tweet. Fact checking a President's tweet translates quite
naturally to speaking truth to power.

~~~
chrisco255
It's also Twitter editorializing... ergo Twitter is no longer a neutral
platform and should be subject to all the same liabilities that Fox, CNN, etc.
are, including libel and defamation.

~~~
jmole
why should twitter be responsible for user content? Certainly libel and
defamation already apply to any material that twitter themselves post.

~~~
FpUser
Because they are supposedly active in moderating content (I am not talking
about taking down illegal stuff here). Moderating content to fit their
narrative arguably makes them a publisher with all the consequences.

I guess what they (the entities meddling with 230) are saying is: either be
totally neutral, with the exception of taking down illegal stuff and spam, or
face the outcome.

~~~
sgift
> Moderating content to fit their narrative arguably makes them a publisher
> with all the consequences.

The problem being that there is no proof that this happens. Despite all
attempts to construct some kind of "one side gets moderated stronger than the
other" narrative this stays a conspiracy theory.

~~~
ashtonkem
The other problem is that moderating content doesn’t make them the publisher.
That is _literally_ the point of section 230, giving platforms the ability to
moderate without becoming responsible for the content.

------
carapace
> any platform providing secure end-to-end encryption would face a torrent of
> litigation

I do not see this. By definition content transmitted with end-to-end
encryption cannot be moderated. How would anyone but the sender and receiver
know that a given message contains "illegal content"?

(I put "illegal content" in quotes because it makes my skin crawl, even though
in practice I think some content should be suppressed (but I also don't want
to be the one to say _which_ content should be suppressed _with the force of
law_ because that's hard-to-impossible, and my heart's not in it.) It's a
little like anti-smoking laws: technically a gross violation of personal
freedom, but one that I'm not prepared to vote down until smoking is a fringe
fetish.)

In terms of platform vs. conduit it seems to me that e2ee forces the service
provider into the "conduit" category.

~~~
matheusmoreira
Under this law, a host will be immune from liability only if they "assist
government authorities to obtain content (i.e., evidence) in a comprehensible,
readable, and usable format". A host that uses end-to-end encryption would not
meet this requirement and would therefore not be immune from liability.

~~~
carapace
If I encrypt an email and send it over the internet would my ISP become
liable?

~~~
matheusmoreira
The host is required only to "assist" the authorities so I don't think they'd
be liable for that. The encryption process was under your control and short of
disabling your internet access or cracking your machine there's nothing they
can do about it. If they encrypted your email themselves on your behalf and
threw the key away so that authorities couldn't get to it, they might be
liable.

Now that I think about it, sending malicious updates to a router might be one
way they could be required to assist the authorities in their quest for lawful
access...

~~~
carapace
Cheers!

I use a web-hosted email provider that offers e2e encryption (done client-side
with a Java app I think.) I don't actually encrypt any emails in practice. But
it would be defeating the entire purpose if a service like this could read my
encrypted emails (for the authorities or anyone else).

Thanks for taking the time to reply.

------
sub7
If Twitter has taught us anything, it's that everyone should _not_ have an
equal voice and an equal platform. A person or company's reach or audience
should be a function of 1) what they're talking about and 2) what credibility
they have re: 1.

I'm all for anyone being allowed to say anything but I'm completely against
anyone being allowed to amplify anything.

Also, the Republicans would do well to introduce legislation expanding data
portability and making it much harder for these completely unethical platforms
to keep your data locked into their walled gardens. Friends' names, birthdays,
and emails as a CSV are basically impossible to export from FB, for example.

~~~
lliamander
> If Twitter has taught us anything, it's that everyone should not have an
> equal voice and an equal platform. A person or company's reach or audience
> should be a function of 1) what they're talking about and 2) what
> credibility they have re: 1.

How do you propose we should establish credibility of sources? Also, how do
you propose to factor in the originality of what they say?

> Also, the Republicans would do well to introduce legislation expanding data
> portability and making it much harder for these completely unethical
> platforms to keep your data locked into their walled gardens. Friends' names,
> birthdays, and emails as a CSV are basically impossible to export from FB,
> for example.

Agreed.

~~~
sub7
Let's put aside originality for now. I think with a few deep structural
changes, you can get 80% of the way there.

I would have all posts require a tagged context; let's call it a topic (in
reality it'd be a multi-faceted hierarchical object). I would also have all
profiles list the topics they'd like to be credible in.

The credibility score for a given topic would be an algorithm that would use
followers credibility scores to "rub off" on your score for that topic. You
seed the profiles you know to be credible and then propagate the graph. It'd
initially be a guesstimate and then gets refined as more humans rate, interact
and judge the post. Over a few posts, I think you'd be able to get a broad
credibility bucket for a given profile/topic pair.
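
A minimal sketch of that propagation step, assuming a tiny invented follower graph and hand-seeded scores (none of this is the commenter's actual system; the profile names and parameters are hypothetical):

```python
# followers[u] lists the profiles that follow u. "panel1"/"panel2" are
# the hand-seeded credible profiles for this topic.
followers = {
    "goodposter": ["panel1", "panel2"],  # followed by both seeds
    "spammer":    ["bot1"],
    "panel1":     [],
    "panel2":     [],
    "bot1":       [],
}
seed = {"panel1": 1.0, "panel2": 1.0}

def propagate(followers, seed, rounds=10, damping=0.85):
    """Seeded profiles keep their score; everyone else takes the damped
    mean of their followers' scores, so credibility 'rubs off' along
    the follower graph over repeated rounds."""
    score = {u: seed.get(u, 0.0) for u in followers}
    for _ in range(rounds):
        score = {
            u: seed[u] if u in seed
            else (damping * sum(score[f] for f in fs) / len(fs) if fs else 0.0)
            for u, fs in followers.items()
        }
    return score

scores = propagate(followers, seed)
# "goodposter" ends up well above "spammer" for this topic
```

The damped-mean update is essentially a PageRank-style fixed point; human ratings and interactions would then adjust the seeds and edge weights over time, as the comment describes.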

Btw I've built a social network with tens of millions of users and we used
this approach and it works pretty well. It does require a bit of moderation
and bad actor weeding out but for the most part it works.

Also, FB and Twitter aren't full of retards (shockingly). They know they can
do this, but they also know they would tank a shitload of their core
quantitative metric for a vague qualitative metric. It'd just never happen
from within.

------
jjcon
I feel like the EFF has slowly devolved into siding with corporate freedom
over all else while only paying lip service to users.

~~~
fsociety
Why is that? S230 protects platforms from trolls. Look at Twitch right now:
you could run up to a streamer in real life, blast copyrighted music, and get
the streamer banned for life.

How can you be held legally liable for content on your platform when you can't
actually control that content?

~~~
burlesona
I think the answer is you can’t, and the implication is maybe those platforms
don’t actually need to exist.

------
tmaly
I hate to beat a dead horse, but we do have a right to free speech.

The issue on encryption should be addressed by 4th amendment protections.

We don't have to be frogs in a pot of water. There is enough tech to give us
enormous reach to a broad audience. This debate does not have to happen in
some basement in congress with lobbyists hidden away from the public.

------
api
Let's say S230 was dropped. Would this really damage free speech online, or
would it damage _platforms_ and lead to a resurgence of the open web?

Edit: also would it genuinely endanger end-to-end encryption or only do so in
the context of these platforms?

It's a genuine question since I don't understand all the nuances of these
laws. If I put up a personal site without S230 protections, am I at risk? What
about truly neutral ISPs and cloud hosts that do not curate content but just
sling packets? What if all the data is encrypted from my server to the end
user anyway?

~~~
ocdtrekkie
Exactly. It would lead to a smaller Internet. Giant platforms built around
scaling past what human moderators can handle would be unviable, but subject-
focused communities and human-curated ones would thrive.

Bear in mind, the law is not going to actually or reasonably crucify you
because someone left a nasty comment on your blog while you were asleep
(nobody seriously believes this can happen). The law needs to be able to go
after companies that built social platforms that create extremely bad social
effects at scale, who are focused on creating as much engagement as possible
to fuel ad revenue.

Honestly, the "largest" social platform that would likely benefit from Section
230's repeal is Reddit: They'd have to make some policy changes and probably
discourage more cross-posting, and very large communities might need company-
hired moderation. But the general design of it being built around small
communities moderated individually is likely to allow it to adapt to the
change pretty well. Maybe they'd shut down the really large general subs, but
it'd be pushed more towards being a shared login way to be a part of a bunch
of distinct communities.

~~~
carapace
I agree. People seem to regard FB and Twitter as bedrock, but I wouldn't be
sorry to see them forced to spend what it takes to actually moderate
themselves or shut down.

I recently rejoined twitter (I watched a bunch of sci-fi shorts on the youtub
and wanted to tell people about them) and it is messed up. It's a hate
machine. People are so shitty to one another. And the doxxing mobs are fucking
terrifying, flash-mobs of hate, ravening and lawless.

For this we invented the transistor?

------
burlesona
Honestly I am not sure legal immunity for network platforms is actually a good
thing. Those platforms are very much like traditional publishers in that they
greatly amplify and spread the content posted on their network. Amplification*
without accountability is very different than freedom of speech.

* if you want to quibble with “amplification,” I would more specifically say that _asynchronous_ and/or _targeted_ distribution is the key characteristic that makes it logical for these platforms to be held to the same standards as traditional publishers in my opinion.

~~~
dilap
You basically just don't believe in free speech. Which is, historically, the
more popular position. The idea that ideas should be able to be amplified &
spread w/o institutional control is rather radical.

~~~
burlesona
That’s false. There has always been a difference between freedom to say what
you want and freedom from consequences. The famous adage “you can’t shout
‘fire’ in a crowded theater.”

Newspapers are liable for libel suits and other legal remedies if they publish
false and damaging information. I think it’s fair to consider Facebook a
publisher when their algorithms take content and distribute it to the world
better than any newspaper could have.

That is fundamentally very different from self-hosting one’s own website, or
giving a speech in the town square.

~~~
lliamander
> There has always been a difference between freedom to say what you want and
> freedom from consequences.

No. Free speech is speech which is (at least largely) free of consequence. Not
just free of legal consequences either, but social consequences as well (such
as getting fired from your job for having certain political opinions). Free
speech =/= first amendment.

> The famous adage “you can’t shout ‘fire’ in a crowded theater.”

Which was used as justification to punish anti-war protesters.

> Newspapers are liable for libel suits and other legal remedies if they
> publish false and damaging information.

Which is part of a set of well known, long-standing exceptions to the first
amendment.

~~~
vharuck
>Free speech is speech which is (at least largely) free of consequence.

Only useless speech comes with no consequences. It's pretty much a tautology.
I don't see how your definition of "free speech" is useful. In what world
would it ever exist?

~~~
lliamander
Perhaps I wasn't clear. Unless you can voice an opinion with which others
disagree without legal or serious social consequences, you do not have free
speech.

