
230, or not 230? That is the EARN IT question - jbegley
https://signal.org/blog/earn-it/
======
reggieband
I thought it was interesting when Twitch partners started talking about a
Twitch policy that seems to hold the partner responsible for moderating their
own chat. That is, if you are a partner and you have community members posting
prohibited content into your Twitch chat then you stand to pay the penalty
through a ban or losing your partnership. You are forced to moderate your own
chat thereby relieving Twitch of having to do so (and presumably giving them
some plausible argument that they are enforcing some level of site wide
moderation).

It was interesting to see the reactions of these streamers since they aren't
typical businesspeople or legal experts. There was quite a bit of debate among
them about the fairness of the streamer being held responsible for random
trolls that entered their chats. When I considered the viewpoint of
individuals instead of corporations it did expand my view of
responsibility/accountability.

~~~
AnthonyMouse
Indeed. The problem with imposing liability on moderators is that they're
_already_ doing about as well as they reasonably can at a job that isn't easy.
Nobody wants a platform full of spam and disinformation.

But it's an inherently difficult trade-off between heavy-handed censorship that
catches too many dolphins in the shark net and not catching strictly 100% of
the bad stuff. If you start imposing liability on the moderators, it forces the
trade-off into an all-or-nothing: either they give up and stop moderating
altogether, or they have to murder all the dolphins, because now a single shark
sighting puts them out of business and you can't always tell the difference.

It also eliminates the possibility for different platforms to experiment with
making the trade-off in different ways. Maybe The New York Times wants to have
an editor read every user comment before posting it, while Reddit has a stronger
commitment to free speech. Shouldn't we have both and let the different
readers make their choices? Isn't that better than locking in the same
compromise criteria for everybody?

~~~
admiral33
I think it's simple. The US Postal Service uses postal inspectors to try to
identify packages containing narcotics[0], and yet we don't hold them liable
for the packages they miss. Any attempt to moderate undesirable content on a
website should not then make you liable for the content you miss.

[0]: [https://www.uspis.gov/about/what-we-do/](https://www.uspis.gov/about/what-we-do/)

~~~
vageli
> Any attempt to moderate undesirable content on a website should not then
> make you liable for the content you miss.

Then what about sites that would use this to their advantage and half-ass the
whole thing? (FWIW I'm with you, just entertaining the argument)

How do you differentiate between ignorance and malice in response? I guess via
emails and memos that would come up in discovery?

~~~
whatshisface
If you're ignorant or malicious the same thing happens: the government kindly
informs you that someone is using your website to break the law, and then you
can either do something or become clearly guilty of knowingly supporting them.
(The government knows you know because they know they told you.)

~~~
jacobush
They fix that case swiftly, then go on to ignore new cases?

~~~
whatshisface
Law enforcement, typically tasked with enforcing laws, would not ignore the
new cases. Individual citizens enforcing laws themselves is nice, but overall
we don't depend on vigilantes.

~~~
ethbro
Parent presumably means if the person in the moderator position ignores new
cases.

To which it seems like a fair response would rely more on free speech
perspectives.

Essentially, can it be said that the majority of the platform is used for
illegal activities? Or are the illegal activities a minority of some other,
lawful activities?

I'd want a system where Reddit is in the clear, but CreditCardSkimmersChat is
not.

~~~
AnthonyMouse
> Parent presumably means if the person in the moderator position ignores new
> cases.

Which is what they're supposed to do. They're not the police, the police are
the police.

> Essentially, can it be said that the majority of the platform is used for
> illegal activities? Or are the illegal activities a minority of some other,
> lawful activities?

This doesn't work because it ends up prohibiting the lawful things anyway. You
have a platform that promises witch hunts and the innocent victims of witch
hunts get evicted from there, and you have a platform that promises no witch
hunts and those people go there but so do all the witches and then you get
calls to shut it down because there are many witches. You're left with nowhere
for the innocent victims of witch hunts to go.

> I'd want a system where Reddit is in the clear, but CreditCardSkimmersChat
> is not.

They tried this with SESTA and it was a monumental failure. It turns out when
you do this, CreditCardSkimmersChat.com goes away and is replaced with
CreditCardSkimmersChat.ru and all that does is make law enforcement's job
harder.

You want CreditCardSkimmersChat.com to carry on existing, because that's the
place you send your agents to camp out in the chat with a logger and execute a
warrant to have their ISP capture all their traffic, and investigate anybody
who shows up from your jurisdiction, and collect stolen credit card numbers to
report to the credit card companies before they can be used for fraudulent
purchases.

You don't want to shut it down because once you know it exists it's a honeypot
with a previous reputation for not being a honeypot, and all shutting it down
will do is cause it to reappear in Russia or on Tor or at a new site you
haven't rediscovered yet meanwhile lots more credit card fraud is happening,
which only makes law enforcement's job harder and less effective.

It's like finally discovering the phone number of the crime boss and calling
the phone company and ordering them to disconnect their phone.

~~~
ethbro
_> You don't want to shut it down because once you know it exists it's a
honeypot_

That's supposing law enforcement has viable ways to leverage knowledge of its
existence into investigation and prosecution.

In a future where we have end-to-end encryption widely deployed, this isn't a
trivial ask.

Criminals are lazy, sure. But write laws for the future we want, not the
easiest case scenario.

~~~
AnthonyMouse
> That's supposing law enforcement has viable ways to leverage knowledge of
> its existence into investigation and prosecution.

Which they do, because that's their job and they do it all day long.

We're somehow talking about both end to end encryption and moderated public
forums, but those are two different things. If you have a public forum where
anybody on the forum can read the messages then anybody on the forum can read
the messages -- including law enforcement. So they join the forum and start
investigating in all the usual ways, and get a warrant to have the ISP
upstream from the site start logging its traffic so they can start locating
the users and getting warrants to bug their homes etc.

When you have end to end encrypted communications, the only people who have
the message are the sender and the receiver. Then there is no dedicated
CreditCardSkimmersChat site, they can use any generic secure messaging
software for that. But this isn't any more difficult for law enforcement than
criminals who communicate in person -- you still have to come by some
reasonable suspicion of them to begin with somehow, and once you have you get
a warrant to install bugs, which overcomes end to end encryption because then
you're collecting data at one of the endpoints.
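
The endpoint-only property described above can be sketched with a toy key exchange: a relay server (or a warrant served on it) sees only public values and ciphertext, while the two endpoints independently derive the same key. This is a minimal illustrative sketch in stdlib Python with throwaway parameters and an unauthenticated XOR stream, not anything a real messenger like Signal uses (Signal uses X25519 and the double ratchet).

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman. Illustrative only: real systems use
# vetted curves and authenticated ciphers, not these stand-ins.
P = 2**255 - 19  # a well-known prime modulus
G = 5            # generator (chosen for illustration)

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)  # (secret, value safe to send via the relay)

def shared_key(priv, other_pub):
    # Both endpoints compute G^(ab) mod P; the relay never learns it.
    return hashlib.sha256(str(pow(other_pub, priv, P)).encode()).digest()

def xor_stream(key, data):
    # Symmetric: applying it twice with the same key restores the input.
    out = bytearray()
    for i, b in enumerate(data):
        out.append(b ^ hashlib.sha256(key + i.to_bytes(8, "big")).digest()[0])
    return bytes(out)

# Only the public halves cross the (hypothetical) relay.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

k_alice = shared_key(a_priv, b_pub)
k_bob = shared_key(b_priv, a_pub)

ciphertext = xor_stream(k_alice, b"meet at the usual place")  # all the relay stores
plaintext = xor_stream(k_bob, ciphertext)  # recoverable only at an endpoint
```

A warrant against the relay yields `a_pub`, `b_pub`, and `ciphertext`, none of which recover the message; a bug at either endpoint yields the shared key and everything decrypts, which is why collection happens at the endpoints.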

~~~
ethbro
> Which they do, because that's their job and they do it all day long.

With varying degrees of success. Let's not pretend an encrypted-everywhere
world (which includes e2e, onion routing, and other options) opens up like an
oyster at the first sign of a warrant.

What would your opinion be if CreditCardSkimmersChat advertised itself as a
place to meet other people involved in credit card skimming, with all
conversation taken to e2e chats as soon as two people were introduced?

I'm not constructing a pathological case here. I'm really interested in how
folks feel about the social responsibilities of serious encryption.

------
hawkice
Plenty of people have complex opinions about 230, but it's a law that says: if
you see a comment that defames you, sue the person who made it; it has nothing
to do with, e.g., whoever runs the skateboarding forum. Who opposes this? It's
just codifying the common-sense understanding of the internet.

~~~
core-questions
OK, so please do tell me how to sue `sk8rboy2020` on the forum then?

~~~
Macha
How do you sue the guy that shouted at you as they left the restaurant? Who
stuck a defamatory sign on the electric pole?

I don't see why the issues these present should move responsibility to the
restaurant or electric company, however.

~~~
Hnrobert42
A better analogy is a community bulletin board, like at a library.

Just thinking aloud, what would I expect if my library’s bb was always covered
in hate speech? Probably that the librarian would put it behind locked glass
and moderate posts. Or take it down altogether. I’d hope for the former.

~~~
nybble41
If you put it behind locked glass, people are going to stop posting much of
anything on that bulletin board. And dealing with the rest is going to take up
too much of the librarian's time, distracting from other duties. It's a
reasonable approach for official communications from the library, but not for
a community forum.

------
aorth
I used the EFF form linked in the blog post to contact my representatives in
California. I will also donate to EFF again. Privacy is important.

~~~
ohazi
I did the same a few weeks ago. Here's the automated response from Feinstein's
office:

> Dear [Name]:

> Thank you for writing to me to share your concerns about law enforcement
> access to encrypted communications. I appreciate the time you took to write,
> and I welcome the opportunity to respond.

> I understand you are opposed to the “Eliminating Abusive and Rampant Neglect
> of Interactive Technologies (EARN IT) Act of 2020” (S. 3398), which I
> introduced with Senators Lindsey Graham (R-SC), Richard Blumenthal (D-CT),
> and Josh Hawley (R-MO) on March 5, 2020. You may be interested to know that
> the Senate Judiciary Committee—of which I am Ranking Member—held a hearing
> on the “EARN IT Act” on March 11, 2020. If you would like to watch the full
> hearing or read the testimonies given by the hearing witnesses, I encourage
> you to visit the following website:
> [https://sen.gov/53RV](https://sen.gov/53RV).

> The “EARN IT Act” would establish a National Commission on Online Sexual
> Exploitation Prevention to recommend best practices for companies to
> identify and report child sexual abuse material. Companies that implement
> these, or substantially similar, best practices would not be liable for any
> child sexual abuse materials that may still be found on their platforms.
> Companies that fail to meet these requirements, or fail to take other
> reasonable measures, would lose their liability protection.

> Child abuse is one of the most heinous crimes, which is why I was deeply
> disturbed by recent reporting by The New York Times about the nearly 70
> million online photos and videos of child sexual abuse that were reported by
> technology companies last year. It is a federal crime to possess,
> distribute, or produce pictures of sexually explicit conduct with minors,
> and technology companies are required to report and remove these images on
> their platforms. Media reports, however, make it clear that current federal
> enforcement measures are insufficient and that we must do more to protect
> children from sexual exploitation.

> Please know that I believe we must strike an appropriate balance between
> personal privacy and public safety. It is helpful for me to hear your
> perspective on this issue, and I will be mindful of your opposition to the
> “EARN IT Act” as the Senate continues to debate proposals to address child
> sexual exploitation.

> Once again, thank you for writing. Should you have any other questions or
> comments, please call my Washington, D.C. office at (202) 224-3841 or visit
> my website at feinstein.senate.gov. You can also follow me online at
> YouTube, Facebook and Twitter, and you can sign up for my email newsletter
> at feinstein.senate.gov/newsletter.

> Best regards.

> Sincerely yours,

> Dianne Feinstein

> United States Senator

I don't know why I still bother.

~~~
GordonS
A typical canned response of the form "Fuck you... think of the children!".

------
GlTChWhISKY
Why is legislation like this being presented? It's simple: fewer people are
looking. Of course this bill is intended to do what we all want. The writing
is on the wall.

Too many people sit around in their passive lifestyle and pass the care to
someone else... doesn't matter who, just as long as they don't need to get off
the couch and stop scrolling Fakebook.

If this passes, you'll see 1984 real soon... I wish more people would start
taking this seriously. The government has been hijacked. We're all screwed,
period. Stop kidding yourself.

~~~
godelski
The premise of 1984 was that the government COULD watch what you were doing at
any given time. In certain respects we're already past that and technologies
like Signal are trying to move us out of that Orwellian world.

------
alec_kendall
> At a high level, what the bill proposes is a system where companies have to
> earn Section 230 protection by following a set of designed-by-committee
> “best practices” that are extraordinarily unlikely to allow end-to-end
> encryption.

As Signal diligently states, EARN IT makes end-to-end encryption difficult,
but not impossible. All relevant companies would like to avoid having to
transition their current architecture over to a design that fits the
specification laid out by EARN IT. This is quite understandable, as the
transition would carry a heavy cost, but if push comes to shove, that's what
they're going to have to do. I expect that we'll be hearing a lot more about
this issue over the coming months. If this bill is passed, it will quickly be
challenged in the Supreme Court.

~~~
yuliyp
So the "best practices" under EARN IT are to be made by a committee of law
enforcement agencies, with no congressional oversight.

~~~
alec_kendall
What's your point?

~~~
yuliyp
Sorry, I could have stated this more clearly. The point is that it basically
gives law enforcement an extremely broad hammer for forcing service providers
to design their systems however law enforcement wants, over the interests of
their users. In practice it would make end-to-end encryption impossible to
implement, not just difficult.

~~~
alec_kendall
I see what you mean. That made me wonder what type of approach they would take
for something that can vary so much and here’s what I found:

“EARN IT works by revoking a type of liability called Section 230 that makes
it possible for providers to operate on the Internet, by preventing the
provider from being held responsible for what their customers do on a platform
like Facebook. The new bill would make it financially impossible for providers
like WhatsApp and Apple to operate services unless they conduct “best
practices” for scanning their systems for CSAM.

Since there are no “best practices” in existence, and the techniques for doing
this while preserving privacy are completely unknown, the bill creates a
government-appointed committee that will tell technology providers what
technology they have to use. The specific nature of the committee is byzantine
and described within the bill itself. Needless to say, the makeup of the
committee, which can include as few as zero data security experts, ensures
that end-to-end encryption will almost certainly not be considered a best
practice.”

It seems that it would be in the financial interest of large tech companies to
challenge the law if it's passed. This is why I believe it will quickly be
brought to the Supreme Court.

[0] [https://blog.cryptographyengineering.com/2020/03/06/earn-it-is-an-attack-on-encryption/](https://blog.cryptographyengineering.com/2020/03/06/earn-it-is-an-attack-on-encryption/)

------
Whatitat90
This only illustrates the point that centralizing server infrastructure and
operations in one jurisdiction can quickly backfire.

------
ocdtrekkie
EARN IT is pretty disingenuous in how it is designed, of course, but I am all
for making it harder and harder to retain Section 230 immunity: It's a mistake
that we allow it in the first place.

We should indeed continue to erode the eligibility for Section 230 to the
point that either the limitations of remaining eligible for immunity makes it
easy for competitors to produce better offerings without immunity, or that
these companies accept legal responsibility for their actions as a cost to
doing business the way they want to. Perhaps this is a vehicle upon which we
gradually sunset immunity-reliant platforms.

Section 230's supporters constantly push hilariously insane narratives about
its importance, suggesting that without it companies would be inherently
violating the law any time one of their users violated the law, or that taking
reasonable measures to prevent platform abuse is "impossible" at the scale Big
Tech operates at.

It's more than past time that we regulate tech companies and hold them
responsible for massive abuses permitted by their platforms just as we
regulate every other sector of business.

~~~
ori_b
Do you think that any blog with a comments section should be legally
responsible for spammers posting on it?

~~~
ocdtrekkie
No, and they wouldn't be by any informed understanding of the law. That's not
how the law has ever worked in any developed society.

Generally, law has both the concept of intent and the concept of
reasonableness. As such, when a company inadequately polices malicious and
abusive content because that content is wildly profitable (hi Google and
Facebook), we should have the legal ability to fine it into oblivion, because
its behavior is not reasonable and the intent behind it can be divined from
its records.

Meanwhile, if you, an individual with a blog, see someone making a bad comment
on your blog and you ban the person, the law would recognize that as a pretty
reasonable moderation practice.

~~~
danShumway
> No, and they wouldn't be by any informed understanding of the law.

You are misinformed about the history of 230. 230 was proposed exactly because
the law was interpreted the way you're saying it wouldn't be.

From Wikipedia below, added emphasis mine:

> This concern was raised by legal challenges against CompuServe and Prodigy,
> early service providers at this time. CompuServe stated they would not
> attempt to regulate what users posted on their services, while Prodigy had
> employed a team of moderators to validate content. Both faced legal
> challenges related to content posted by their users. In Cubby, Inc. v.
> CompuServe Inc., _CompuServe was found not to be at fault_ as, by its stance as
> allowing all content to go unmoderated, _it was a distributor and thus not
> liable for libelous content_ posted by users. However, Stratton Oakmont,
> Inc. v. Prodigy Services Co. found that _as Prodigy had taken an editorial
> role with regard to customer content, it was a publisher and legally
> responsible for libel committed by customers._

> [...]

> United States Representative Christopher Cox (R-CA) had read an article
> about the two cases and felt the decisions were backwards. _"It struck me
> that if that rule was going to take hold then the internet would become the
> Wild West and nobody would have any incentive to keep the internet civil"_,
> Cox stated.

\---

It's become increasingly popular for people to say that Section 230 was a
mistake. Usually they support that with claims that concerns about its repeal
are purely theoretical fearmongering, despite the fact that we literally have
case precedent on the books right now about what the Internet would look like
without Section 230, and how the existing laws were being interpreted.

When people raise concerns that without Section 230 the Internet would be
divided up into completely unmoderated platforms and aggressively curated
gatekeepers, that's not fearmongering. It's history.

Ironically, the only websites that wouldn't be affected by a repeal of Section
230 are the completely unmoderated hellholes we want to discourage online,
because they have CompuServe's precedent and the 1st Amendment to hide
behind.

~~~
ocdtrekkie
But in a world where we feel it was backwards that moderators were punished
and unmoderated platforms weren't... Congress decided "let's just make
everyone immune" was the right way to go?

And again, I think the examples here are missing the same concept that Section
230 fails to recognize: Profit, as I discussed here:
[https://news.ycombinator.com/item?id=22816016](https://news.ycombinator.com/item?id=22816016)
It seems like the author of Section 230 failed to recognize we're in a
capitalist society when this regulation was drafted.

When platforms are taking a cut out of illegal activity, as Big Tech platforms
do when they operate ad networks, courts would have to agree that any platform
party, regardless of whether or not they currently moderate, should be held to
some manner of responsibility.

Right now, when an old lady clicks a Google search result for "mapquest",
clicks the top link for "Maps Quest"[0] because Google ads aren't
distinguishable from real search results to the untrained eye, is pushed to
install a browser extension (from the Chrome Web Store) that hijacks her
browser's new tab and search, injects malicious ads, and scrapes her private
info to relay to an attacker, Google makes money. And is wholly protected by
Section 230 for that activity and unable to be held responsible for refusing
to delist the malicious ad.

In what world is that the right legal position?

[0] (This is a very real world example, I've done a lot of senior citizen tech
support, and this is how 90% of them get owned.)

~~~
IAmEveryone
That example has absolutely nothing to do with Sec 230. Google’s ad design is
all on Google. If it were illegal, Sec 230 wouldn’t protect them. And while
Google might be protected against liability for Mapquest’s business practices,
Mapquest isn’t. If their behavior is harmful and illegal, they are liable.

~~~
ocdtrekkie
MapQuest did nothing wrong in this example. The problem is the fake sites that
are taking the top spot in search results above the legitimate MapQuest link
when you search Google for MapQuest, and Google refuses to delist them. And of
course, Google lets people buy ads for other companies' trademarks, which is a
whole different ball of issues.

(MapQuest is a popular one for malicious sites to pretend to be because most
of the people searching for it are seniors... they heard about it twenty years
ago and then never moved on from searching for it when they want directions
somewhere.)

~~~
spaced-out
What does this have to do with Section 230?

~~~
ocdtrekkie
The moment someone points out that Google makes a huge amount of money on
scams and malware, Section 230 means it can't really be held responsible for
it.

~~~
bryan_w
Fining Google doesn't fix the problem there; you would want to work with
Google to find out who made the deceptive ad and deal with them, so they can't
continue on to hurt more people.

~~~
ocdtrekkie
Google knows who they are: They're business partners.

