There is no bottom when it comes to Section 230 reform proposals (ericgoldman.org)
97 points by amadeuspagel on Oct 27, 2021 | 75 comments



Good write-up. What I find most frightening is that this is yet another attempt to expand the concept of harm in our laws.

"such recommendation materially contributed to a physical or severe emotional injury to any person."

What specifically is an 'emotional injury'? What's the legal bar for that? Does one merely need to say "I experienced trauma from reading that" and that flies? Or do I just need to go to a sympathetic doctor who can say I'm experiencing anxiety symptoms?

Any lawyers here have color on how such loose, ambiguous language around subjective harms will be (or currently is) handled by the courts?


> emotional injury

Emotional injury isn't a new concept in law. Such injury has been around probably since the dawn of legal systems. For example, you (broadly) cannot harass or slander people, even if it doesn't cause physical or financial harm.


I disagree with the qualifier "broadly." Slander and harassment really have very specific definitions. Social media mobs and the news are rarely, if ever, held to full (or any) consequences. When the news is punished for slander, it's usually for very specific things that they've misstated. They can dance around things and paint pictures all they want, as long as they don't directly claim something happened that didn't.


Specific law applies to journalists under the US First Amendment, which guarantees freedom of the press. The rules established by the US Supreme Court in NY Times v Sullivan (1964) apply different standards based on whether the subject is a public figure. For example, you can print outright lies about President Biden. You must be very careful regarding a private citizen who is not a public figure (not a celebrity, for example).

> Slander and harassment really have very specific definitions.

Yes, I know. I was in a hurry and didn't come up with something better. I'm just pointing out that emotional injuries are well-established in law. There are plenty more examples, such as damages for emotional distress.

> Social media mobs ... are rarely, if ever, held to full (or any) consequences.

I think that is, in part, because the legal system hasn't caught up with reality.


Both harassment and slander have very specific legal definitions and likely aren't as broadly applicable as you're thinking, though. Slander, specifically, does often have a financial harm component to it as well, though it isn't required.


Agreed; I was just hurrying to provide a well-established example of injuries in the law that involve emotions.


Usually "mere words" are not enough to be considered an assault unless a direct present threat is made.


>What specifically is an 'emotional injury'?

I'm instantly reminded of a traumatic period of at least three years, when my ex cheated on me and then added insult to injury. Emotional injury is real. I felt a pain as if someone had beaten the crap out of me, and it was present all day, every day, from a few seconds after waking up until falling asleep. I felt as if I had been beaten up so badly that, had it been physical damage, I would have been put in hospital.

That is real emotional injury and should be punishable by law. And if I had to make an educated guess, it's also why some men beat up their cheating ex-partners or the third party, commit murder, suicide, etc.

And reading some text can infuriate me, but not injure me. Text can only injure me if I'm already in an emotionally vulnerable state; see the cheating-ex example.

Being bullied or harassed is a different topic.


> my ex cheated on me and then added insult to injury... That is real emotional injury and should be punishable by law.

While I'm sorry that you had a bad experience, it is a phenomenally bad idea to make failed personal relationships into criminal offenses. This is like some nightmare straight out of 1700s puritanism.


> What specifically is an 'emotional injury'?

I think it’s also important to think about what it’s not. It’s not something that can happen to a company, so no one with deep pockets will be able to go after big tech. It’ll open up the potential for people to sue, but that’s just a matter of throwing a tiny amount of money around, if anyone is dumb enough to think they can win a lawsuit against a big tech company.


Interesting perspective and point.

Is there a potential risk there for companies with class action suits? Or will the arbitration clauses/class action waivers found in terms of services prevent that from happening?


>Any lawyers here have color on how such loose, ambiguous language around subjective harms will be (or currently is) handled by the courts?

Not a lawyer, but I have taken a number of law courses focused on civil law and have worked on legal platforms. I suppose it would be handled the same way such concepts have long been handled in tort cases in common law systems.


The law generally defers to experts with recognized training and experience in the relevant field. https://en.m.wikipedia.org/wiki/Daubert_standard


God. This place is toxic lately.


One hop to a kangaroo court, then?


How does a court, with a judge appointed and confirmed by elected representatives, a jury of independent citizens, utilizing expert testimony and long-established rules of procedure, equate with a kangaroo court?


For one thing, a law like this puts a staggering amount of power in the hands of whoever gets to define "emotional injury."


Emotional distress has been a concept in tort law for decades and the sky has yet to fall. It's generally measured as a combination of 'here's the awful thing that happened, imagine how you'd feel, jurors' and testimony from the victim and/or psychiatrist(s) assessing the impact of the emotional injury in terms of subsequent dysfunction.


Tort law is one thing. The US Code is another.


I can see this going the way of bitemark analysis.


> This would dial the entire Internet ecosystem back to look a lot more like the 1990s.

That would be wonderful. And while Facebook could no longer aggregate and weaponize feeds from your friends against you, a browser or a downloadable RSS feed viewer could use any algorithm to fetch and sort the feeds of your friends, whether they are on Facebook or on their blogs or anywhere else. Which sounds to me like the way the internet should have evolved: with personal computers instead of dumb terminals.


I think "This would dial the entire Internet ecosystem back to look a lot more like the 1990s" probably understates it, to be honest. In the 1990s, heavy USENET users, say, would generally use tools which highlighted posts from users they were interested in, downranked or hid posts from users they were specifically _not_ interested in, and similar. This made USENET usable (for a while).

NNTP clients generally worked by just downloading the whole group, so this could be done client side. With the scale of modern internet services, however, you're not going to be downloading the whole day's twitter content and doing client-side filtering, so you'd need to provide your filtering list to the provider. At that point, you have a friends list, and on a naive reading this might be a problem.
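
A minimal sketch of that killfile/scorefile idea in Python (the rule format, scores, and threshold here are invented for illustration, not any real newsreader's syntax):

  from dataclasses import dataclass

  @dataclass
  class Post:
      author: str
      subject: str
      body: str

  # Scorefile-style rules: positive scores highlight, negative scores bury.
  AUTHOR_SCORES = {"alice@example.com": 100, "spammer@example.com": -9999}
  HIDE_BELOW = -100  # posts scoring under this are hidden entirely

  def score(post):
      return AUTHOR_SCORES.get(post.author, 0)

  def arrange(posts):
      # Everything happens client-side, after downloading the whole group.
      visible = [p for p in posts if score(p) >= HIDE_BELOW]
      return sorted(visible, key=score, reverse=True)  # highlights first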


I would just download headers and only pull the body for content I am interested in. Usenet doesn't have the same hierarchy as Twitter. One would not have to download all of Twitter, just the sub-groups one subscribes to. One could also filter which groups to pull down.

  alt.something.twitter.text-only.emotional-support-animals.fluffy-bunnies
  alt.something.twitter.text-only.checkmarked.public-figures
  alt.something.twitter.bin.sfw.memes
The audience could optionally use some add-on in the news client to validate things like Twitter identities and checkmarks, etc. Another add-on could filter out what Twitter considers spam, if you were so inclined, or you could subscribe to a crowd-sourced spam filter. These add-ons could pull signed data from each respective Usenet group. One could probably go as far as to make a Twitter client that uses Usenet behind the scenes and communicates through an HTTPS gateway for VIP, to avoid people needing firewall rules for NNTPS in corporate environments.
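
A rough sketch of that headers-first flow using Python's stdlib nntplib (available through Python 3.12; the server and group names are hypothetical):

  import nntplib

  # Hypothetical server and group; real names would differ.
  with nntplib.NNTP("news.example.com") as srv:
      _, count, first, last, name = srv.group(
          "alt.something.twitter.text-only.emotional-support-animals.fluffy-bunnies")
      # Fetch lightweight overview data (headers only) for the last 100 articles.
      _, overviews = srv.over((max(first, last - 99), last))
      for art_num, over in overviews:
          if "bunnies" in over["subject"].lower():
              # Pull the full body only for articles we actually care about.
              _, info = srv.body(art_num)
              print(over["subject"], "-", len(info.lines), "lines")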


Maybe, but that is losing sight of the most likely outcome of the government rolling in to 'reform' whatever it is they decide needs it. It doesn't matter what the zeitgeist is going in. The likely outcome is that this attention will end up entrenching the current big players and cutting off avenues of competition.


How?


I dunno, there are lots of ways to get to the same place. Maybe send in lots of expensive lobbyists and make sure the law is vague and complicated; then, after it is implemented, small social media companies can't legally form because they can't hire enough lawyers or provide better services. And the big companies find some sort of loophole that formed in the complexity that lets them keep making money with only minor tweaks to their business. The politicians throw up their hands and blame the other politicians/the 1%/the unvaccinated/immigrants/young people/rap music/whatever for why the mess can't be worked out.

The problem around sec. 230 is that large groups of people disagree. Trying to solve that with law is going to be ugly. The politicians aren't going to roll in and make a useful contribution to how algorithms work technically, or civility on the internet. That is hard enough to do with years of experience working with algorithms - without having to run the lobbyist & special interest gauntlet like legislation does.


I think the existence of 230 has always rubbed many people the wrong way. Newspapers have had reader-contributed columns forever, but they were never given as much immunity as internet services get. But because the internet was exciting, people never questioned it. 230 itself is a loophole. I don't know how it should change either, but there should be a broader debate.


Agree. Though you wouldn’t be able to read comments on hackernews.


Except HN isn’t recommending comments, is it? I thought it’s mostly based on popularity and users have the majority of the influence for what bubbles up. Is that wrong?

HN isn’t interfering with user generated rankings, right? That’s still a platform IMO.

I’m sure any regulation changes will benefit big tech. Facebook will have their politically connected staff writing half the bill so it’s burdensome for anyone small (aka competitors), but tolerable for themselves.


> Except HN isn’t recommending comments, is it?

HN is a recommendations engine and the front page is similar to a feed. HN has user-generated content and comments. HN is definitely a social media platform and would be covered under any such laws.

HN has algorithms to determine which stories and comments are shown and in which order. The primary inputs are upvotes, downvotes, and flags, but it’s not a transparent or linear algorithm. Moderators (or single moderator, I’m not sure) have influence to override algorithm decisions like buried stories and force them to be shown again.

Which is not really that much different than what Facebook and other sites are doing with likes, follows, and views in their own algorithms. If HN exceeded the unique monthly visitor threshold of this bill, it would absolutely be impacted.

HN even has a feature that goes a step further than Facebook: The site allows privileged posters (YC companies) to insert special stories for hiring requests that don’t allow comments and, as far as I can tell, get some special priority in the front page rankings without necessarily requiring user upvotes. If someone decided to argue that such a job posting caused them “emotional harm” under this proposed law, they could have some standing.
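
For reference, the oft-cited approximation of HN's story ranking (the production algorithm reportedly layers the penalties and moderator overrides described above on top) is just upvotes decayed by age. A minimal sketch:

  def rank(points, age_hours, gravity=1.8):
      # Newer and more-upvoted stories rank higher; gravity controls decay.
      return (points - 1) / (age_hours + 2) ** gravity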


> HN isn’t interfering with user generated rankings, right?

/u/dang will absolutely step in where necessary, to flag a post, split a thread off, or warn/ban people.


Yeah, but since it's not an algorithm doing it, and it's not personalized per user, I don't think the law applies.


That's completely irrelevant to the current law.


Why not? Even assuming the ranking is removed, it doesn't change much. I can't be the only one who reads all the comments. Plus, there is this phenomenon where everyone piles onto the top-ranked comment thread and it becomes enormous.


I don't follow your response to cm2187. He means very literally that you wouldn't be able to read comments on hacker news, because without section 230 it would be legally very dangerous for sites to host user-generated content, such as comments.


Yet they did host comments, if not exactly in Hacker News form in 1990, then for many years before Section 230 and the DMCA. Lawsuits happened, but they were de facto not very effective at getting information taken down (regardless of what courts have since ruled plaintiffs were entitled to).


>Yet they did host comments, if not exactly in Hacker News form in 1990, then for many years before Section 230 [...] Lawsuits happened, but they were de facto not very effective at getting information taken down

But lawsuits still cost defendants millions in expensive attorneys' fees to fight them in court. E.g. Oakmont's $200 million lawsuit against Prodigy. [1][2]

So it misses the point to emphasize only that, in 1995, Prodigy didn't remove an anonymous user's comment about Oakmont. In other words, leaving the dangerous comment up isn't really "winning". The real win, which Section 230 clarified, is not having to deal with lawsuits at all from future "Oakmonts".

So would HN exist if Paul Graham had to deal with repeated $200 million lawsuits because anonymous posters kept writing "Facebook is a cancer on society..." ? It doesn't seem worth the liability risk without Section 230.

[1] https://itif.org/publications/2021/02/22/overview-section-23...

[2] https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....


Section 230 goes back to 1996. There would have been very few websites with comments in 1996 (mainstream web browsers, insofar as there was such a thing as a mainstream web browser in 1996, had only in the last couple of years got the ability to submit HTTP forms). And there were very few users interacting with that internet; only on the order of 100 million, and most of those _very_ infrequently. The scale of the problem is entirely different now.


I mean that they'd switch to chronological mode

They'd have to moderate somehow. In this new world, if it is OK to force porn sites to get the ID card of everyone who posts videos, then HN should also be able to moderate user submissions.


If my reading of the bill is correct, it would expose any SaaS website to liability. What does Gmail do if not promoting content that’s specific to an individual?


I find this article unconvincing, in that it looks at this issue through the usual bias and superiority complex of HN folks.

Politicians no longer want any company to be able to mass manipulate their constituents. The fact that operating a platform at scale relying on user submitted content would be impossible with this bill is precisely the outcome being sought. This requires tech companies to either change their business model or take editorial responsibility for the content promoted on their platform.

I actually find this bill clever and elegant since it excludes search engines, discussion boards and more. Yes certain business models and startup dreams relying on making algorithmic recommendations without being accountable for the outcomes on individuals and society will die, but is this really a bad thing?

To me this is a total positive; there are still plenty of opportunities to build large-scale platforms on the web. Moving forward, it will simply require bringing more value to the table, rather than just putting people in a walled garden and making money by poisoning their minds and society at the same time.


> I actually find this bill clever and elegant since it excludes search engines, discussion boards and more.

If you give a specific exemption for narrow classes of existing things, you're crushing all of the un-thought-of things that innovation would create. There's no reasonable boundary that permits search engines and discussion boards but disallows "algorithmic recommendation systems", because they're the same thing.


I am not so sure. Laws are defined to be as limited in scope as possible.

When cars were getting faster in the 20th century, we made speed limits. This didn't hinder building faster cars; it just meant you had to do your speeding on race tracks and in rallies.

At some point we must define what's more harmful than beneficial. Ads-driven models with algorithmic recommendation systems are not very beneficial, or so it seems.


We limited speed on public roads. We didn't make a blanket ban on going fast and then give specific exceptions for race tracks as they existed at the time. (If we had, rallying would probably never have been invented).


Well, I agree politicians find it annoying that a company can mass manipulate their constituents. However, they are perfectly fine if that same platform lets THEM mass manipulate their constituents. This isn't about manipulation, it is about control and power.

I think people would hate an internet without 230. Everyone is just acting like a kid tattling in kindergarten to get their way. The thing that gets lost is that Hatebook et al. are mass manipulating because it is insanely profitable. Make it less profitable and the algorithm will change. This is why I feel the solution is a tax on digital advertising revenue. People will never choose paid over free-with-ads at scale, and rules like this could offer users a more pragmatic choice.


> this requires tech companies to either change their business model or take editorial responsibility

So you're for the end of YouTube et al.?


Why would YouTube need to end? You could still search for videos, subscribe to channels, share them, etc. You'd just lose the sidebar.


It's user-generated content. As the article explains, since all sites use some identifying information (e.g., location), all a person needs to do is *claim* some emotional harm for Section 230 summary dismissal not to apply.

I.e., this opens every site up to massive amounts of litigation.


No, not all sites use identifying information to "enhance the prominence of information with respect to other information". YouTube doesn't have to use my location to decide which videos to show. In fact, I'd prefer it didn't.


Actually, YouTube would be able to survive, for example by opening up its platform and API to allow users to bring their own discovery and recommendation engines. This would enable an ecosystem of SaaS products letting users build their own engine without code, or one-click-install open-source ones. And since the goal of users is not to maximize engagement but to protect their health and find interesting content, this would not lead us back to the current state of affairs. Very few people would choose toxic algorithms, and at the very least people would now have a choice.

YouTube would still be able to sell ads, but it would be the dumb pipe it was always supposed to be under Section 230. This would be a much healthier situation in many regards.

I think the key thing is to stop considering the current status quo the golden age: there is so much abuse everywhere, from misinformation and manipulation to walled gardens and monopolies, but somehow we've rationalized it and accepted it as a fact of life. What this law would do is introduce some much-needed rule of law into the digital landscape, and I believe it would create a healthier and even more competitive landscape.


I guess this is then effectively a breakup. You separate YouTube the UGC platform from YouTube the recommendation system.

Presumably, then, the "recommendation company" does not need to take editorial responsibility because they aren't accepting the UGC.

Don't we end up back at the status quo? Why wouldn't your SaaS companies just become the places where these status-quo issues reoccur?


Justice Against Malicious Algorithms Act

I thought the author was being sarcastic with that. As it turns out? Nope. That's the title of the bill.


Just be grateful it's not (yet) the Protecting Children Against Malicious Algorithms and Digital Terrorism Act.


It'll never be called that because American acts are named in such a way that when they're abbreviated they will make a pronounceable word.


While I'll agree that it's pronounceable, is JAMAA a word?


Jama(ica) Act.


It's hard to see how an algorithm could be judged "malicious".

Can an algorithm be "well-meaning"? Is there such a thing as a "kind" algorithm? What kind of attitude does the Sieve of Eratosthenes have?


Sure, because Facebook, Google, Twitter, etc. are all run by the almighty algorithm, and not by the crooks behind it who made it do exactly what they expect.


So make a law that says doing crooked things is forbidden (oh - hang on).


Two naive thoughts:

1) What are the rules for non-US sites that allow access to US based IP addresses? Does that count/is that enforceable?

2) I'd find it fairly amusing if sites with UGC would "shut off" access to US based IP addresses at 4.999m unique visitors (or whatever the limit ends up being) to stay in the "small service" exception window.


This is a good write-up on the perils of messing with the fine-tuned mechanism that is Section 230 and the potentially devastating effects such changes could have on the internet as we know it. If not carefully drafted, any such bill will open internet companies up to many, many lawsuits that can no longer be easily dismissed. The article goes into some examples where not getting the language quite right can blow the whole thing up.

I also found the following article a good starting point for learning about what Section 230 is and is not: https://www.techdirt.com/articles/20200531/23325444617/hello...


> The bill eliminates Section 230 for making a personalized recommendation of information that materially contributes to a physical or severe emotional injury.

IMHO that’s purposely worded to limit the bypass to people, not companies. That’s why the SV rep supports it. It’ll allow people with no money or chance of winning to sue tech companies and that’s it.

Paraphrasing someone who once commented on HN, and I totally agree: services providing reverse chronological feeds should be considered platforms while services providing algorithmic feeds should be considered publishers.

There should be no qualifier. Anything that’s a personalized feed should be considered publishing and companies should be liable for promoting that content.


>services providing reverse chronological feeds should be considered platforms while services providing algorithmic feeds should be considered publishers

A reverse chronological feed is no more or less algorithmic than any other - any sort requires an algorithm. You're just criminalizing algorithmic complexity, which is insane. There's no reason the same n items in a list should expose a site owner to different degrees of legal liability because they're sorted one way versus another.

What about cases when the user can choose a different sort? Is a company liable based on whether or not a user decided they wanted to see content in anything other than strict chronological order? Is Twitter publishing or promoting content as soon as users tag it? Does sorting by karma make Hacker News a "publisher" of content? Do downvotes and flags count as "promotion" of the downvoted content, or all non-downvoted content?
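
To make that concrete, here's a minimal, hypothetical sketch: both feeds below are equally "algorithmic" and differ only in the sort key.

  from dataclasses import dataclass
  from datetime import datetime

  @dataclass
  class Post:
      created_at: datetime
      likes: int
      replies: int

  def chronological(posts):
      # The proposed "platform" feed: newest first. Still a sort, still an algorithm.
      return sorted(posts, key=lambda p: p.created_at, reverse=True)

  def engagement(posts):
      # The proposed "publisher" feed: identical items, different key (made-up weights).
      return sorted(posts, key=lambda p: p.likes + 2 * p.replies, reverse=True)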


> What about cases when the user can choose a different sort? Is a company liable based on whether or not a user decided they wanted to see content in anything other than strict chronological order?

No. If I’m asking for it and they give me what I asked for, that’s not publishing. If they pick some content that would normally be buried and jam it in my face because some ML algorithm thinks it’ll cause engagement that’s publishing.

> You're just criminalizing algorithmic complexity, which is insane.

I don’t think it’s insane. I think letting black box ML algorithms moderate public discourse is insane. I have no problem with that being turned into a liability for big tech.


What about spam filters?


The proposed bill would not make this distinction though. Both of these would fall under the “personalized recommendation” umbrella, and so would Gmail and a ton of other services.


The intention of 230 is to prevent lawsuits against agents of connectivity over the content conveyed by such connectivity. More specifically, to prevent the use of content as a legal weapon to deny access.

The abuse of 230 lies in what separates a content source from that which conveys such content. It's not that drawing an objective separation is challenging; it's that nobody wants objectivity. For example, some people don't care that Facebook will be harmed, and perhaps they hope it will be, but they'd cry a river if it prevented equivalent access to cat videos on YouTube.

That subjectivity is the only defense that potential targets of reform, the beneficiaries of online advertising, have, and they will exploit it to death irrespective of what any specific reform contains or intends. As such, any proposed reform is going to make people angry no matter how toxic Facebook is. The quantity and directness of evidence of vile, harmful, and catastrophic behavior doesn't matter.

As I see it there are only two paths forward for successful regulation and they will both make people very upset.


So in five years, after they shoot the corpse of Facebook several times whilst spraying its cohorts with gunfire, shall we all install micro-Facebook serving boxes on our desks and plug them into our routers or cell towers, as long as the party writing the software operates nothing? Or do they get a bullet too?


Wasn’t Trump all about section 230 reform, and everyone was angry about that? And now there’s positive discussions around this concept. What changed?


No, not everyone was angry about it. Biden, for example, said in January 2020 that it should be "revoked immediately"[1]. There were also other proposals backed by some democrats to limit it, like the EARN IT Act.

https://www.theverge.com/2020/1/17/21070403/joe-biden-presid...


I stopped reading after this line: “You can imagine my shock when I learned that this bill was the Section 230 reform bill that the Democratic leaders on the House Energy & Commerce Committee chose to back…”

It blows my mind that after all this time people still operate under the assumption that one political party's representatives are somehow less stupid than the other party's, and are actually surprised when their preferred party's representatives don't meet their expectations. Or, which is also likely in this case, don't actually read the bills that they vote on and support.


I took it to mean the author expected the bill to be backed by some no-consequence fringe representatives, not serious politicians with real clout.


There Is No Bottom when it comes to sensational, alarmist blog posts. I've stopped reading them.

Especially beware of those that agree with your suspicions or biases - you are the prey.


If you had read it, you might have derived value from the article, which is simply factual. It's the reality of expecting senile old men born before computers became the norm to handle this that is absurd, not this piece.


It is not at all simply factual! It is filled with hyperbole and rants.



