
Should Facebook, Google be liable for user posts? asks U.S. Attorney General - jhatax
https://www.reuters.com/article/us-internet-regulation-justice-idUSKBN20D26S
======
danShumway
There are 3 options for moderation:

1\. Platforms with no moderation (8Chan -- except probably even worse, because
even 8Chan moderates some content)

2\. Publishers that pre-vet all posted content (the NYT with no comment
section)

3\. Platforms that retroactively moderate content only after it's been posted, in whatever way they see fit (Twitter, Facebook, Twitch, YouTube, Reddit, Hacker News, and every public forum, IRC channel, and bug tracker ever built)

Revoking section 230 just gets rid of option 3. It's not magic, it just means
that we have one less moderation strategy. And option 3 is my favorite.

Option 2 takes voices away from the powerless and would be a major step
backwards for freedom of expression. It would entrench powerful, traditional
media companies and allow them greater control over public narratives and
public conversations. Option 1 effectively forces anyone who doesn't want to
live on 8Chan off of the Internet. Moderation is a requirement for any online
community to remain stable and healthy.

Even taking the premise that Twitter is an existential threat to democracy
(which I am at least mildly skeptical of), it's still mind-boggling to me that
people are debating how to regulate giant Internet companies instead of
implementing the sensible fix, which is just to break those companies up and
increase competition. All of the "they control the media and shape public
opinion" arguments people are making about Facebook/Twitter boil down to the
fact that ~5 companies have become so large that getting kicked off of their
services can be at least somewhat reasonably argued to have an effect on
speech. None of this would be a problem if the companies weren't big enough to
control so much of the discourse.

So we could get rid of section 230 and implement a complicated solution that
will have negative knock-on effects and unintended consequences for the entire
Internet. Or, we could enforce and expand the antitrust laws that are already
on the books and break up 5 companies, with almost no risk to the rest of the
Internet.

What problem does revoking section 230 solve that antitrust law doesn't?

~~~
slg
I would generally agree with everything you said here, except that Option 1 is really just Option 3 where the "way they see fit" is very minimal. Moderation still exists on those "unmoderated" sites. No right-minded person supports completely unmoderated content, just as no right-minded person supports completely unregulated free speech. Child porn is the most obvious example of an exception to both. We can all agree that we don't want to see that and don't want to host it on our platforms. Once you accept that, it basically becomes a question of negotiating where the line is. It is reminiscent of that old inappropriate Churchill joke about haggling over the price [1].

[1] - [https://www.goodreads.com/quotes/300099-churchill-madam-
woul...](https://www.goodreads.com/quotes/300099-churchill-madam-would-you-
sleep-with-me-for-five-million)

~~~
SkyBelow
I think a difference of kind, and not just of degree, can be established between moderating only illegal content and moderating more broadly.

But maybe not, given that there are a lot of different interpretations of what is illegal and judgment calls have to be made over that, as well as issues of jurisdiction and even issues involving laws that may be unconstitutional.

~~~
slg
The problem with limiting it to just illegal content is: who gets to decide what is illegal? Websites don't have jurisdictions in the classical sense. Should websites follow German law and ban Nazi imagery? Should they follow Polish law and ban blasphemy? Should they follow Russian law and ban homosexual imagery? Should they follow Chinese law and ban support for an independent Hong Kong?

~~~
mirimir
Some huge percentage of Western mass media is illegal under Saudi law.

------
protomyth
It really seems like this article is a bit off on the reasoning it ascribes to people. The biggest objection I have heard is that Facebook / YouTube / Twitter should now be classed as "publishers" and not "providers" because of the perceived bias in their removal of individuals and content.

~~~
cmac2992
I hear that argument a lot as well. It's a very strange argument because
publishers don't have a legal requirement to be "unbiased".

~~~
jeffdavis
Common carriers (like a telephone company) are content-neutral (unbiased) and
have no legal responsibility for what is said. If someone mails you a
threatening letter, you can't sue USPS.

Publishers can publish (or decline to publish) whatever they want, but they do
have some responsibility for what they say. Libel, copyright infringement, and
threats all carry consequences even if the publisher is not the author.
Publishers can be as biased as they want.

Is FB a publisher or a common carrier? What about Google? Youtube? Instagram?
Twitter?

(The answer is that they are kind of both, depending on the exact part of the organization. And they are having it both ways: control without responsibility.)

~~~
koboll
>Publishers can publish (or decline to publish) whatever they want, but they
do have some responsibility for what they say.

It is abjectly insane to label a tweet as something Twitter, the company, is
"saying". If we're retooling the law to orient it toward that definition, then
the inevitable endgame, after the avalanche of litigation, will be that the
concept of posting text on social media or blogging platforms is dead. It
kills Web 2.0 in its entirety. It regresses the United States back to the dark
ages of a completely one-directional media, where the best you can do as an
outsider is to submit a letter to the editor.

~~~
gadders
Well, given that they are allowing some voices and silencing others, largely in one political direction, it would seem they are expressing an opinion indirectly.

I don't think anyone wants Twitter to be responsible for every tweet. What most people want is for them to be more like what they used to claim to be: "the free speech wing of the free speech party."

~~~
Fjolsvith
Maybe not disabling accounts, but they do shadow ban:

[https://metro.co.uk/2018/09/06/twitter-admits-
shadowbanning-...](https://metro.co.uk/2018/09/06/twitter-admits-
shadowbanning-and-unfairly-filtering-600000-accounts-7920206/)

------
cassalian
Can anyone explain why William Barr seems so intent on trying to change
technology in such major ways as this? Does he not understand the implications
of the things he proposes? Or worse, does he understand the implications and
proposes them nonetheless? I just don't get this guy's motivation.

As far as I can tell, revoking section 230 would just result in people putting
up fake content themselves and then suing the platform they posted to. Is
there a reason why this wouldn't be possible?

Also, I see a lot of people focusing on major platforms, but why wouldn't such changes also impact tiny sites? In particular, it seems that anyone casually hosting their own site (not something they focus on often) would be forced to remove all user-generated content or quit their day job to manage their site - am I misinterpreting the implications here?

~~~
Nasrudith
Because Barr is literally a fascist with a long history of violating civil rights. He wants to rule unquestioned, unopposed, and unaccountable. He doesn't care, so long as he can torture people into compliance.

~~~
cassalian
My initial reaction to reading about Barr makes me want to agree with you. However, I really haven't looked into him too much - tbh, I only really know about him from his popping up in HN articles (where he is always cast in a negative light). Any chance you have some links relevant to his history of violating civil rights, etc.? I would love to read through them to form a stronger opinion myself.

~~~
Nasrudith
The ACLU piece here lists many of his actions that were outright struck down, including sticking Haitian asylum seekers in Gitmo indefinitely. And that list is restricted to the obscenities that the courts called out, as opposed to the ones accepted as the norm.

[https://www.aclu.org/blog/national-security/william-barr-
has...](https://www.aclu.org/blog/national-security/william-barr-has-long-
history-abusing-civil-rights-and-liberties-name)

My personal philosophy is that civil rights are not up for debate or a vote. Normalizing debate over them is itself a form of damage that mere politeness can never justify.

~~~
cassalian
Thank you for the link! ^_^

It looks like Barr has been pushing pretty extreme views for longer than I've
been alive :/

I can definitely see why you would use the word fascist to describe him.

------
throw7
I have no idea what exact problem Barr is trying to solve. And if I run a web forum/mailing list/etc., am I now liable for anything users say on these services?

It seems like he's unhappy that Facebook/Google/et al. are shaping (or trying to shape) a narrative... I mean, he's not wrong. But everyone is: businesses, politicians, the CIA, Hacker News.

Opening people up to easier liability for running a web forum just means fewer will be able to provide this type of service. Not to mention, it favors those with lots of money and time to spend on a lawsuit of that nature, e.g. the gov't and large businesses... hmmm, maybe that's the point: only the gov't and monied interests should shape the narrative.

~~~
Faark
As you stated, everyone is already shaping the narrative. But right now nearly all of the power to do so is highly centralized in a very short list of companies. You already mentioned some tech companies, but I'd certainly also add a bunch of classical media/news orgs, e.g. FOX and Disney.

We could now discuss whether democracy can work with so much centralized power. But the result we'll see is those powers fighting each other. It might be more obvious than before thanks to our current polarized times and such easy-to-identify targets.

------
ogre_codes
I don't think it's a good precedent to make companies liable for the posts of users. I do think it's reasonable to examine the ways Google and Facebook profit off of extremist views and surface them algorithmically.

As soon as Google and Facebook moved to an opinionated queue of content (YouTube's suggested videos and Facebook's timeline) based on things like engagement, I could see the argument that they ceased being mere conduits of information and became publishers themselves.

------
bearcobra
The question I have for people who advocate for "platform liability" is, at
what size should they become liable for user generated content? Facebook &
Google definitely seem big enough to most people, and maybe Twitter & Reddit.
But what about Y Combinator?

~~~
jeffdavis
Liability should not be black-and-white. HN has a process for flagging and
burying content, and they employ moderators. If some kind of libel is buried
in a greyed-out user comment or a flagged post, then no harm. If the libel
sits on the front page in the title of a post for two days, there's a real
issue there.

It's interesting that you mention HN, because it's basically not a problem
here.
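
A minimal sketch of the flag-and-bury mechanic described above. The threshold, field names, and logic are hypothetical; HN's actual implementation isn't public:

```python
FLAG_THRESHOLD = 5  # hypothetical number of flags before a post is buried

def visible_posts(posts):
    """Buried or moderator-killed posts stay on the site but drop out of
    normal listings -- the "no harm" case described in the comment above."""
    return [p for p in posts
            if p["flags"] < FLAG_THRESHOLD and not p["killed"]]

posts = [
    {"title": "useful article", "flags": 0, "killed": False},
    {"title": "libelous rant",  "flags": 9, "killed": False},
]
print([p["title"] for p in visible_posts(posts)])  # ['useful article']
```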

~~~
bearcobra
That kind of goes to my question though. It seems that people are suggesting
that by offering a process to flag & bury content and by employing moderators,
HN becomes a publisher and therefore should be liable for our posts.

~~~
jeffdavis
To some degree, yes. Maybe not in the same way as a book publisher, but also
not zero like a common carrier.

------
gonational
I think a couple of big concepts are being conflated. There are two really important questions:

1\. Should Google, Facebook, etc. be responsible for user-generated content hosted on their websites (i.e., should they _not_ be treated as a “public square”)?

2\. Should the government have any hand in telling any company or any person
what they can or cannot say, as long as they are not making threats or
publishing illegal materials?

I am of the personal opinion that most (all?) of the major tech companies have
engaged in censorship and even politically driven enforcement of their content
policies, and therefore should have lost their “public square” status a long
time ago, making them responsible for illegal content posted by their users.

Pertaining to the second question, there simply is no question; the government does not and should never have any authority here, because the Constitution protects free speech, regardless of what kind of Ministry of Truth they would like to implement.

~~~
GcVmvNhBsU
I think question 1 should really be: should Google, Facebook, etc. be responsible for the user-generated content that they algorithmically focus presentation on? It's one thing to have, say, an early-2000s forum where users search for and consume information. It's another to ignore timelines and present whatever content will likely generate the most engagement. The latter, to me, is editorialization of the content.
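
A minimal sketch of that distinction, with a made-up engagement score standing in for whatever the platforms actually compute. The point is that the second ordering reflects the platform's choices rather than the user's:

```python
from datetime import datetime, timedelta
import random

posts = [{"id": i,
          "ts": datetime(2020, 2, 20) - timedelta(hours=i),
          "engagement": random.random()}  # stand-in for a learned score
         for i in range(50)]

# The early-2000s-forum model: a neutral, user-predictable ordering.
chronological = sorted(posts, key=lambda p: p["ts"], reverse=True)

# The model being criticized: the platform decides what you see first.
engagement_ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
```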

------
eyeinthepyramid
Will every post need to be pre-moderated to ensure that nothing objectionable
is published? I wonder how this would affect sites like Hacker News and
Reddit, or any forum sites really.

~~~
jeffdavis
I don't think the big tech companies should get blanket common carrier
protections because they aren't common carriers. They have knowledge and
control over content, and therefore should be responsible for it to some
degree.

But that doesn't mean they should have no protections or that they should be treated like a book publisher. There can be some reasonable processes and limits of liability in place.

~~~
eyeinthepyramid
Are you suggesting that there should be a size requirement before a site has
to pre-moderate their UGC? What metric would you use? Revenue? Taxable Income?
Impressions? Volume of UGC? # of employees?

If there's a blanket repeal of liability protection for UGC, it's going to
have a much larger impact on smaller forums.

~~~
jeffdavis
No, I'm not suggesting company size should have anything to do with it.

------
kryogen1c
> escape punishment for harboring misinformation and extremist content

It's so bananas that statements like this are glossed over and unqualified. This is not a solved problem, and it's not even being treated like it's a problem at all.

This perception that there are a set of correct facts and incorrect facts is just so meaningless. What does it even mean to be true? $Person is on $Video saying $Statement. True or false? Well, it depends. It ALWAYS depends. Are you asking if $Video.Words = $Statement.Words? Almost never. You are not investigating $Video.Soundwaves and $Person.VocalCords; you are making a case for $Person.Beliefs. What if $Person.Beliefs @ $Video.TimeStamp != $Person.Beliefs @ $Today? Is it true but meaningless, or are you trying to imply conclusions contextually - but guess what, different people interpret the same context differently!

An example suitable for HN is talking about security. Is your company secure? You can't answer the question because _the question is bad_. The answer to security is ALWAYS "it depends." Are you talking about physically secure against a wandering drunk trying to pee on your server, or physically secure against a disgruntled employee building a killdozer and driving through your building? Are you talking about secure from some kid who finds LOIC and tries to DoS you, or from a long-term campaign by a nation-state APT? The discussion _requires_ framing, and so does discussing "misinformation and extremist content".

~~~
MadWombat
This is a bizarre line of reasoning

> This perception that there are a set of correct facts and incorrect facts
> is just so meaningless

Yes, there are facts. In your example, if I say "person X said Y" and have a
video of them saying it, I am stating a fact. I am not dealing in beliefs and
I don't give a fuck whether or not the person still supports the statements
they made or not. I am saying that at some point in time T, person X said Y
and this fact is on record.

~~~
curryst
You are correct, there are literal truths. Some person factually used some set of words in a particular order. But the part that people care about is how we interpret those words, and a quote being factually accurate does not make its implication accurate.

In an extreme example, if someone says "There are extremists in the world, and
I am not going to be one of them", I can quote that as "There are extremists
in the world, and I am ... one of them", which is factually true, that is a
portion of what was said. The interpretation is the polar opposite, but it is
factually true.

Likewise, there are facts that are misleading, but true. "The Earth is at the center of our solar system" is a perfectly valid statement in the sense that the motion of the planets can be modeled with the Earth at a fixed point and the other planets moving around it. It's messy, and the movements look extremely erratic, but it's perfectly valid to model it that way.

There are also facts that are true, given a set of circumstances. "I weigh 7
pounds". It's not true on Earth, but in the correct gravitational field, I do
weigh 7 pounds. Determining whether that statement is true depends on the
implication; was I implying that it is true on Earth or not? That depends on
the context of that statement, and on how you interpret the context.

~~~
Izkata
> In an extreme example, if someone says "There are extremists in the world,
> and I am not going to be one of them", I can quote that as "There are
> extremists in the world, and I am ... one of them", which is factually true,
> that is a portion of what was said. The interpretation is the polar
> opposite, but it is factually true.

Not so extreme considering this kind of editing is basically what led to Count
Dankula's court case and conviction.

~~~
joshuamorton
That's a reach.

Doing explicitly offensive things as satire and being selectively quoted
aren't similar.

~~~
0x4477
Why is it wrong to say explicitly offensive things and why should someone be
legally punished for it?

Who gets to decide what is and isn't offensive and to what degree?

~~~
MadWombat
> Why is it wrong to say explicitly offensive things

Because they are offensive.

> and why should someone be legally punished for it

Because that is what legal systems are for: punishing people who do things we, as a society, find objectionable.

> Who gets to decide what is and isn't offensive and to what degree?

That is a rather general question, but generally speaking, the lawmakers and the legal system decide things like that.

Now, mind you, personally, I support a rather radical version of freedom of expression. I think almost every kind of censorship is ultimately detrimental to society, but I am not naive enough to ask "but who gets to decide" or "but why can't I".

------
kilo_bravo_3
My favorite thing about the "publisher" vs. "platform" rabbit hole people of a
certain political persuasion seem to be burrowing through as a not-so-veiled
threat towards service providers that "censor" posts consisting of pictures of
Michelle Obama photoshopped to look like a gorilla is the delusional
alternate-reality plane of existence on which they seem to reside where they
think that going through with the threat will mean that their preferred
content will be more likely to be hosted.

~~~
clSTophEjUdRanu
That's a bingo

------
kelnos
I think we need to stop trying to fit these things into old laws that weren't
written with them in mind.

Twitter isn't a telephone company _or_ a newspaper. I think for the most part
they should have the liability protection that a telephone company has. But
_users_ do want moderation. They often want to restrict what they see to posts
by people in their own echo chamber. They want the ability to flag things as
spam or abuse. They want to be able to block people. They want posts taken
down if enough people complain about them. And to extend that further, they
often won't mind if there's a system in place to automatically do the above
without their prior action.

The problem ends up being bias, even if it's just perceived, and not real. If
a certain group thinks "the algorithms" are suppressing their speech, then the
algorithms are either bad, or aren't transparent enough to prove that they're
unbiased.

At the end of the day, people believe that these companies have an agenda that
they push by shaping discussion in certain ways. Whether true or not, the best
way to combat that is complete transparency, or just no filtering or
reordering at all.

------
tasty_freeze
It is easy to ascribe bad motives to a person of a different party affiliation and to assume this is just selective application of the law to advance political goals.

However, there is another way to look at this, setting my own political leanings aside. As little effort as Democrats put into antitrust prosecutions, Republicans (of the past 30 years) have been anti-antitrust. In the late 90s, when the DOJ had Microsoft on the rack, nominee Bush said he'd stop the antitrust effort. And indeed, even though MS had been found in violation of antitrust law, President Bush stopped the effort to break up MS; instead, the company was told to make relatively minor changes to its behavior.

[https://slashdot.org/story/01/09/06/157258/Bush-
Administrati...](https://slashdot.org/story/01/09/06/157258/Bush-
Administration-Stops-Microsoft-Breakup)

So is it that the current administration finally believes there is a place for
antitrust, or is it using the law as a political tool?

~~~
Nasrudith
It is a political tool - they aren't even considering the actual monopolies. I thought that was obvious by now, given their very flexible standards.

------
acd
In Sweden they probably are liable for moderating user content, under a law called BBS-lagen, the bulletin board system law. Yes, the law is a bit old, but it regulates content published by users and makes the host liable for the data published on the platform.

------
sneak
Reading the following:

> _“No longer are tech companies the underdog upstarts. They have become
> titans,” Barr said at a public meeting held by the Justice Department to
> examine the future of Section 230 of the Communications Decency Act._

> _“Given this changing technological landscape, valid questions have been
> raised about whether Section 230’s broad immunity is necessary at least in
> its current form,” he said._

...all I can think of is “well, here comes the state-sponsored moat.”

If they weaken these protections, the big four will just hire a few more
entire buildings of minimum wage content moderators (like most of them already
have running) and it’s curtains for small entrants.

It makes me really sad to see the US thinking about shooting its only real
growth industry in the foot.

Edit:

> _while a few Democratic leaders have said the law allows the services to
> escape punishment for harboring misinformation and extremist content._

It’s also terrifying to think that parts of our government want to explicitly
punish people for hosting legal content that they don’t like to read.

~~~
icheishvili
FAANG will need to decide if they're platforms or publishers. They currently
moderate the communities, albeit selectively, while enjoying the protections
granted by being a platform. This can lead to abuse of power where only select
viewpoints are moderated out because unaccountable corporate leadership says
so.

It's correct to be thinking about this, notwithstanding the fact that I place
little faith in the federal government to produce the correct outcome.

~~~
azinman2
Let’s be clear: the “conservative” voices that have been moderated out were removed not because of the PC police, but for obvious reasons that violate the ToS (spreading racism, inciting violence, etc.). Look over [1] and show me this consistent “abuse of power”.

[1]
[https://en.wikipedia.org/wiki/Twitter_suspensions](https://en.wikipedia.org/wiki/Twitter_suspensions)

~~~
klipt
There are a lot of misandrist "men are trash" posters on Twitter that haven't
been banned:
[https://mobile.twitter.com/hashtag/menaretrash?lang=en](https://mobile.twitter.com/hashtag/menaretrash?lang=en)

Unless they're equally blasé about "women are trash" posters, it seems their enforcement against sexism is one-sided?

~~~
azinman2
I don’t work for Twitter, so I can’t speak for them. Trump is still on Twitter saying all kinds of horrible things, and I can only presume many others are as well. I’m not sure what triggers crossing a line, but I don’t agree with this accusation of some great anti-conservative conspiracy. I see it most often in reaction to Alex Jones, who spreads all kinds of lies that have led to violence.

Note it’s not like there are humans evaluating each and every tweet, so enforcement is going to be inconsistently applied.

I also find it interesting that since posting on HN, someone has been trying to reset my Facebook account. Talk about censorship...

------
WaitWaitWha
When we used to run BBSes, we were repeatedly warned by lawyers and courts that if we started actively managing the content of other people's posts, we would become publishers and our legal protection would vanish. Why is this not the case for large social media orgs that do exactly that?

------
nnq
Look... _if platforms become responsible for content published on them, it is the end of free speech. Period. You want THIS?!_

The point would be to limit/regulate targeting: either (a) they're a no-login, no-user-personalization place that does no targeting, and everyone gets a random sample from the same content (I'd _really prefer this!_), or (b) it needs to be very clear what kind of targeting is allowed... and the line gets very blurry here; amplifying hate speech for clicks and eyeballs can probably pay well, and there need to be ways to solve this problem...
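
A minimal sketch of the difference between option (a) and the status quo, assuming nothing about how any real platform scores content (the score function here is just a caller-supplied stand-in):

```python
import random

def random_feed(posts, k=20):
    """Option (a): no login, no personalization -- every visitor draws
    a random sample from the same shared pool of content."""
    return random.sample(posts, min(k, len(posts)))

def targeted_feed(posts, score, k=20):
    """The status quo: rank the same pool by some per-user score."""
    return sorted(posts, key=score, reverse=True)[:k]

posts = [f"post-{i}" for i in range(100)]
print(random_feed(posts, k=5))               # same distribution for everyone
print(targeted_feed(posts, score=len, k=5))  # ordering chosen by the platform
```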

------
OrgNet
Of course they should, because they moderate content. (you can't have it both
ways... you either moderate or don't... but if you do moderate, you are
responsible for what you let through)

------
aSplash0fDerp
>with any alterations to one of the internet’s key legal frameworks likely to
draw unexpected consequences. “It’s hard to know exactly what the
ramifications might be.”

Since there is no direct bridge to the digital money, power and influence,
analog types will wreck the whole thing trying to implement legislation to
give them any kind of foothold on all of that easy profit.

The lack of influence/sway will eventually drive the traditional powers to contrive the shortest-term solutions to destabilize the ecosystem. It's more than a "war of words" at play.

------
baby
BTW this would probably include reddit and HN as well.

------
AnimalMuppet
Well, just to put the shoe on the other foot:

Should US AG Barr be liable for (or bound by) comments/tweets by President
Trump?

This isn't as tight a parallel as I would like. But when I make a post on HN,
say, it's _my_ words and _my_ opinion, and does not represent the opinion of
HN (even though they moderate). I don't speak for HN; they don't speak for me.

In the same way, when Trump sends his tweets-of-the-day, that doesn't speak
for AG Barr or the DOJ (despite Trump's idea that he is the chief law-
enforcement official).

As I said, that isn't quite as tight as I would like it to be. But it's
something that Barr should be able to understand at both an intellectual and
an emotional level.

------
dfischer
The solution we need is a p2p social network with user opt-in moderation lists. The government should be nowhere close to this.
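
A minimal sketch of what user opt-in moderation lists could look like, assuming client-side filtering against lists the user chose to subscribe to (all names here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ModerationList:
    name: str
    blocked_authors: set = field(default_factory=set)

@dataclass
class User:
    subscriptions: list = field(default_factory=list)

    def filter_feed(self, posts):
        # Union of every list the user opted into; no central authority.
        blocked = set().union(*(m.blocked_authors for m in self.subscriptions))
        return [p for p in posts if p["author"] not in blocked]

spam_list = ModerationList("anti-spam", {"spammer42"})
alice = User(subscriptions=[spam_list])
feed = [{"author": "spammer42", "text": "buy now"},
        {"author": "bob", "text": "hello"}]
print(alice.filter_feed(feed))  # only bob's post survives Alice's chosen lists
```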

------
013a
I tend to believe that the only path forward is for these "global platforms"
to become more sharded, allowing smaller, more focused communities to thrive
and self-moderate.

Platforms like Reddit, Discord, etc. have "tiers" of moderation whereby community leaders handle the day-to-day moderation of individual content posted within the community, yet there is still Big Company Inc. at the top, capable of moderating entire communities (you can't create a subreddit focused on school shootings, stuff like that). These platforms have problems; there are problems intrinsic to any situation where Speech and Social Interaction is involved. But their problems are far smaller, in both magnitude and quantity, than those of the global platforms.

It seems to me that anyone trying to hold an organization or moderator liable for what people post on their platform would end up with a Supreme Court-level case on their hands concerning the First Amendment. Who would win, I don't know; I'm not a lawyer, but that feels like the ground we're treading on.

------
m463
There's another facet to this story.

If companies are to moderate, they must have the ability to view the content.

Say there's a requirement to moderate an encrypted chat client.

See where this is going? Even light moderation means they keep the data collection going.

------
KorematsuFred
Tech, aviation, and agriculture are some of the areas where Americans are the world leaders by far, and yet the American government is totally set on hurting these very industries ("we'll break up the evil Google" and so on).

This is beyond idiotic.

------
tboyd47
The Section 230 saga just shows how dangerous it is for the government to
interfere in industry.

The CDA was passed when people were scared of the internet and looked to
government to protect them from its evils. Section 230 was added to save "the
little guy" from becoming collateral damage of this legislation.

Fast forward 30 years and these "little guys" have grown into the scary forces
that everyone wants the government to protect them from!

Imagine if ordinary people had been allowed to sue Google and Facebook over this time. There's good reason to think that no one would have been able to monetize the internet the way Google, Facebook, etc. did if not for Section 230.

I don't think anyone in Congress is interested in repealing Section 230 but
I'm glad people in Washington are at least talking about it.

------
vinniejames
No. Full stop.

~~~
slumdev
Saying "full stop" doesn't make or strengthen an argument.

~~~
Nasrudith
It says there is no need for an argument because there is nothing that could justify one. "Should it be legal for the government to kill and harvest the organs of underperforming schoolchildren?"

"No. Full stop." is a stance that there is nothing to even argue.

------
acd
In Sweden they most probably are, under a law called BBS-lagen, the bulletin board system law, by which the provider of content is to some extent liable for the content.

------
carapace
Should Facebook, Google shield users from the legal consequences of posting illegal content?

If we were using e.g. Ted Nelson's Xanadu (instead of the WWW) every post and
link would have provenance information and it would be _technologically
feasible_ to make the original source of a given piece of illegal content
liable for the legal consequences of publishing it, as well as each and every
person/entity that promulgated it across the network.

As it is now, these platforms omit or delete provenance information, making it
technically impossible to moderate _at scale_.
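
A minimal sketch of the provenance idea: every item carries a chain recording who originated it and who re-shared it, so the original source stays traceable. This is a toy shape under stated assumptions; real Xanadu-style links are far richer:

```python
import hashlib
import json
import time

def originate(author, content):
    item = {"author": author, "content": content,
            "ts": time.time(), "chain": [author]}
    item["id"] = hashlib.sha256(json.dumps(content).encode()).hexdigest()[:16]
    return item

def reshare(item, sharer):
    # Copy the item and extend the chain instead of discarding it,
    # so liability for the original content remains traceable.
    shared = dict(item)
    shared["chain"] = item["chain"] + [sharer]
    return shared

post = originate("alice", "original claim")
viral = reshare(reshare(post, "bob"), "carol")
print(viral["chain"])  # ['alice', 'bob', 'carol'] -- full provenance
```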

------
jeffdavis
The protections designed for phone companies, etc., make perfect sense: the
phone company is just facilitating communication in a content-neutral way.
Phone companies should not be responsible for knowing or caring what content
is shared, even if it's some kind of slander or treasonous plot being
discussed.

But does that apply to web platforms that aren't content-neutral? I think
probably not. There is such a huge volume of communication that they should
have some protections built in, but not blanket protection.

~~~
luckylion
> Phone companies should not be responsible for knowing or caring what content
> is shared, even if it's some kind of slander or treasonous plot being
> discussed.

What if they influenced the content that each side hears, e.g. added strategic gaps where they leave out key noises, to make the phone call go on for the maximum amount of time (because they charge by the second)?

------
admiral33
Should International Paper be liable if an extremist writes down their ideas?

Should the US postal service be liable if they mail it to their friends?

The US Postal Service uses dogs to find drugs in the mail, and yet we don't charge the Postmaster General with drug smuggling.

Any attempt to get rid of undesirable content should not then make you liable
for the content you miss. The platform vs publisher debate is silly.

------
rayvd
Obviously, whoever has money that lawyers can go after should be liable...

------
shmerl
It's the same Barr who wages war on encryption.

------
mikedilger
Conservatives want platforms to moderate in a politically neutral way. Passing
a law requiring such would be unconstitutional as it would violate the free
speech of those companies. Making section 230 conditional upon political
neutrality might not be unconstitutional. No Internet platform would ever risk
operating without section 230 protections, so they would essentially be forced
into political neutrality. So the same effect would be achieved.

Nobody is seriously considering simply removing Section 230; that would be devastating to the economy and to free speech alike. Any such assertions are no more than saber-rattling and idle threats.

Neither is anybody seriously talking about ceasing all moderation entirely.
Platforms would become flooded with spam, among other things making them
virtually unusable.

Where this all gets very complicated very fast, IMHO, is in how you define
political neutrality. And I'll stop here because that's much too long of a
discussion to have in a HN comment.

------
lanternslight
If they are censoring, then yes.

~~~
OrgNet
this, 100%... and label them as dangerous internet entities

------
notamanager
This is such a disingenuous framing from the AG, as well as from media outlets that keep misrepresenting Section 230.

That law isn't about protecting Facebook or Google; it's about ensuring that anyone can express themselves online without needing a highly paid lawyer and a protracted trial to do so.

It also isn't about publisher vs. platform; Section 230 protects the Times from being sued over comments on its website, the same as any bigger or smaller operation.

It's tragic how the powers that be in this country are trying to insert a
lawyer into every transaction like it's a jobs program, and the infuriating
part is that they are trying to convince people that it's for their own
benefit.

~~~
rayiner
> That law isn't about protecting Facebook or Google it's about ensuring that
> anyone can express themselves online without needing a highly paid lawyer
> and a protracted trial to do so.

If you post something defamatory on Facebook, you can be sued, but under
Section 230 Facebook cannot. I happen to think that's a good arrangement, but
it's definitely about "protecting Facebook or Google," and not about
protecting the individuals "express[ing] themselves online."

~~~
heavyset_go
> _not about protecting the individuals "express[ing] themselves online."_

Without it, who in their right mind would host content created by individuals online? GeoCities, blogs, and personal sites wouldn't exist as they did or do today.

------
fragsworth
I don't know how Barr expects to have a civil discussion about any topic in
the midst of what he did with the Roger Stone prosecution, and in the midst of
this presidency.

The public and his own Justice Department cannot have a reasonable discussion
with him, when his behavior and actions up to this point have almost all
appeared to be for one purpose - to help the President and his supporters in
criminal issues.

The question we all find ourselves asking is: "So how is this going to benefit
the President at everyone else's expense?" and even if it doesn't benefit him,
it colors the entire discussion in a bad light.

~~~
riversflow
There’s such a thing as professional compartmentalization. If you don’t like a colleague’s/vendor’s/customer’s/government’s behavior, you can (but not necessarily should) quarrel about _that and that only_. When you start to act as if a slight (however large you perceive it to be) is enough to blockade all relations in everything you do together, the initiator looks weak. A decision to halt all relations has far-reaching impacts: if you are a business, it almost always affects your employees and could easily affect your customers. If you are a government, it affects both countries’ citizens and possibly others as well. If it is between colleagues, it affects everyone below you.

In the case of Barr or Trump, we are talking about effectively shuttering government progress/modernization. What does this serve? We don’t get these months/years back; the beat of progress marches on regardless. It’s unreasonable to think that we are just a few years away from a sweeping blue tide of progressives (or a red tide of conservatives under Obama) that is going to have the whole of Congress on its side to make huge reforms, or whatever it is that would make the USG quickly modernize. Our system of governance stalls with this sort of behavior, and it only really serves entrenched interests, if anyone.

~~~
multiplegeorges
His actions call into question all his professional judgment, that's not
compartmentalizable.

Considering the gravity of this issue, reasonable people can conclude that
we'd be better off "wasting" these years and months and to return to the idea
later when someone with more judicial independence is in the position.

------
cs702
Sacha Baron Cohen proposed this in his widely seen/read keynote speech at the
ADL's annual summit:

[https://www.adl.org/news/article/sacha-baron-cohens-
keynote-...](https://www.adl.org/news/article/sacha-baron-cohens-keynote-
address-at-adls-2019-never-is-now-summit-on-anti-semitism)

If you haven't seen it before, I would highly recommend it -- regardless of
whether you agree with him or not.

------
m0zg
The sentiment expressed by many in this thread would flip 180 degrees if Zuck
e.g. one day woke up and decided he doesn't like commies (which would be a
very reasonable, and amply justified opinion, in my view), and had his
underlings at Facebook censor the entirety of Bernie Sanders' presidential
campaign from the network.

My position on the issue is simple: if a site owner censors/throttles/shadowbans/detrends/etc. _any_ legal speech, they're a publisher, and they should be liable for the stuff that remains on their site. Don't want that? Be a carrier and don't censor legal speech. Nothing could be easier.

~~~
NE2z2T9qi
I share the exact same sentiment as you, but I don't know if I'm being fair to how ugly an unmoderated forum can be.

For example, spamming advertisements in comment sections is completely legal speech. So would be typing in gibberish and hitting enter a thousand times. But both of these things would ruin the point of the forum.

Even if I compare Hacker News to Reddit, the former is consistently high quality while the latter is 95% garbage, in my opinion. Why? Probably because Hacker News is far more heavily curated.

At the same time, I feel like YouTube has crossed the line with idea suppression. YouTube gives the impression that it's an open, non-biased platform that links you to content you're interested in. But there are several egregious examples where popular videos with near-mainstream "conservative" viewpoints are suppressed into oblivion (e.g. a video appears on the 12th page of results even when you search for its title verbatim and it has millions more views than every other "relevant" result).

But just because I can give examples of things that cross the line (suppressing popular conservative videos) vs. things that don't (suppressing random gibberish, suppressing bot-created videos)... I don't think I could clearly articulate any rules to say exactly where that line is in a way that is scalable. YouTube created YouTube... I might just have to defer to their moderation policies while hoping another competitor comes along to challenge them.

~~~
m0zg
Well, we did make robocalling illegal, right? We could do the same for ad spam
quite easily. IMO 99% of YouTube issues would be resolved by just not showing
comments by default. I.e. you're welcome to read and write comments if you
like, but you have to click a button first to show the comment section, and
don't have to see them otherwise. Some sites already do this.

------
cletus
The Trump administration complaining about "harboring misinformation". The ironing [sic] is delicious [1].

There is no universal objective truth. Specifically, there are things that reasonable people can disagree about, and the same set of facts can be used to argue different positions. This fact is abused by the mentally challenged to argue ridiculous positions (e.g. anti-vaxxers, the Moon landings were faked, that sort of thing).

Likewise, as seen here, one side will argue that those who disagree are engaging in misinformation (and in the Trump administration's case, from the President down, there are multiple claims per day that are demonstrably false, such that no one can really keep up). The agenda is to silence the opposition and undermine confidence in any sort of news.

ISPs were given safe harbor from liability for traffic on their network, for
good reason. They just need to comply with certain standards. Tech companies
really are no different and to argue otherwise would set an incredibly
dangerous precedent (IMHO).

[1]:
[https://www.youtube.com/watch?v=7p23mA2VV0A](https://www.youtube.com/watch?v=7p23mA2VV0A)

------
RustyBucket
If porn sites are liable for their content - FB should be too. Porn sites
managed to survive and thrive and so will FB.

~~~
vorpalhex
Define "liable". They share the same Section 230 protection as FB.

------
goatinaboat
Yes absolutely. They exercise editorial control even if they attempt to
disguise it behind “algorithms”. Everything posted on Facebook should be
treated as if it was a newspaper article, for all legal purposes.

~~~
karatestomp
Yeah, easy yes from me too. "Oh but we won't be able to make a business out of
exploiting user data and exposing hundreds of low-paid workers to
psychologically damaging material anymore" right, yes, exactly, that's the
point.

~~~
eyeinthepyramid
Won't they need to have even more low-paid workers to make sure they don't
publish something that violates the law?

~~~
karatestomp
If they're treated like any other publisher, having that many workers manually vetting everything probably won't be viable. Which is fine, because it _shouldn't_ be. It confuses me how, every time we have one of those posts about how horrible that work is for the people doing it, most posters kinda throw up their hands and go "well, whatcha gonna do" when the obvious answer is... _simply not do it_? If your business requires that, the easy answer to how not to cause that harm is to not have the business.

~~~
eyeinthepyramid
If the survival of their company depended on it, they would absolutely figure out a way to pre-moderate content, and almost certainly that would include significantly more low-paid moderators.

They had something like $18 billion in profit last year; you don't think they could afford to hire an army of moderators?

~~~
karatestomp
I reckon they can afford 150,000-200,000 more moderators, moderator-managers, workers on and managers of tools for same, et c., before that profit margin starts getting mighty thin. Is that enough? I don't know; maybe it is. How long before profit hits zero would investing in Facebook start to be considered a poor use of capital, since that's the actual tipping point? Also not sure.
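
Rough arithmetic behind that range, under two assumptions: the ~$18 billion profit figure quoted above, and a guessed fully loaded cost of ~$100k per moderation-related employee per year:

```python
annual_profit = 18e9       # from the parent comment
cost_per_head = 100e3      # assumption: salary + management + tooling
print(annual_profit / cost_per_head)  # 180000.0 -- inside the 150k-200k range
```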

------
psychlops
Since Facebook and Google shape the information that is seen using proprietary algorithms, they have become publishers. Perhaps if their algorithms were open and available, they might have an argument in their defense.

Until then, it is entirely possible they are shaping a narrative based on whatever model they want.

I don't buy the argument made by Barr that the scale of the platform reaches a
point that it therefore requires regulation. This seems to be a simple money
grab where large tech companies need to tithe to lawmakers.

~~~
decasteve
You hit the nail on the head.

They want to shape the content and do whatever curation or editorializing via human or algorithmic means, yet be seen as an open forum of user-generated content. A have-it-and-eat-it-too scenario.

Not to mention adding paid content that blends in in a way that is indistinguishable from user submissions. This further muddies the intentions of the platform.

~~~
d1zzy
Yes, they most definitely want to have their cake and eat it too, but there are lots of good arguments to make for that, arguments that have nothing to do with sustaining the business model of multi-billion-dollar Internet companies.

------
fareesh
If these companies are treated as a "public square", then the First Amendment ought to apply. It's disappointing to see enlightened ideas like free speech being taken apart by these large corporations to push what seems to be a political agenda.

Recent example: a female NASCAR driver shares a selfie with the President of the United States, and Twitter's algorithms flag it as sensitive content. When algorithms make mistakes that lead to race-based discrimination, it's treated extremely seriously. When this sort of thing happens, it seems like everyone shakes their head and chuckles, "oh, those silly algorithms." Outcomes that marginalize folks based on political views are dangerous for your country. The shoe will be on the other foot someday.

~~~
scarejunba
Can’t wait for the end of flagging and modkilling on HN to preserve First
Amendment rights. This is going to be good.

~~~
fareesh
I'm making the argument that the authority to moderate discussions is being
misused and you are replying as if I am suggesting removal of this authority.

~~~
scarejunba
I'm riffing off what you're saying, not attempting to counter it.

------
drannex
Companies are liable for their employees.

Employees produce content/products/sales/projects for the company.

Social media users create the content that gives value to social networks; thus social media users are, in a way, employees of the company. The company therefore has an obligation to limit, and be liable for, the content that exists on its platform.

