U.S. Senators propose limiting liability shield for social media platforms (reuters.com)
66 points by pseudolus on Feb 5, 2021 | 116 comments



When I was young, I realized that eventually the problems of the real world would catch up to the internet. It was in the mid-2000s that I realized a well-staffed company, with people working on problems for 8+ hours a day, for years, could always outpace individuals. Just as has been true with all human affairs: better-resourced groups can overpower lesser-resourced groups.

It has been true for years now that the problems of the real world have spilled onto the internet. In its infancy, the internet was an "island of sanity," a (socially and intellectually) secure enclave from the real world.

In the 2010s (and particularly the late 2010s), this problem accelerated, and now, rather than merely bringing the problems of the real world to the net, the net is home to brand-new problems that would have been impossible without it.

It's no surprise that government wants to step in. It should be telling that both the left and the right have plans for the government to regulate the internet. (And at least in my estimation, none of these plans is well founded: eliminate Section 230, enforce some kind of "fairness," or simply do more censorship.)

I honestly believe this is mostly a losing battle. Companies and the government (and really, people) will not have any reason to become less invested in the internet, and so the interests of individuals will be steamrolled. It's a totally unavoidable problem for most people, although a tech-savvy elite can avoid a lot of these problems.


I had this realization later than you.

What finally clicked for me is that power dynamics form a positive feedback loop that naturally compounds inequality. If you have a little more power, the natural thing to do with that power is to coerce/force/encourage/persuade others to give you a little more. The longer you let that system iterate, the greater the power imbalance. This is why the majority of human history has had massive inequality.

Every now and then, an event happens that goes against that. For example, WWII destroyed massive amounts of physical infrastructure, so it was a force towards equality. Those with more to lose did lose more. (Heart-breakingly, the pandemic is the exact opposite, since COVID-19 doesn't harm material goods, just people.)

One kind of event is a rapid technological change. These work sort of like a rainstorm for power. All of a sudden, bits of empowerment from the new technology rain down semi-randomly onto humanity. Anyone with a cup to catch it gets some. The early Internet was like this.

But eventually the system iterates like it always does, and those with a bit more power use it to build the rain catchment systems and gutters to route that new power over to themselves. Sometimes this gets routed to new powerful people who were the ones who caught a lot of the early rain. But the technology itself ceases to be a force for equality. This is where we are today with the Internet.


Our American founders, who are so venerated as the architects of the country's acceptance of free speech, were not too keen on an unfettered discourse of the masses. Coming from a time when the public's reach was limited by expensive distribution methods and a lack of literacy, the assumption was that the free speech they proposed would be mostly limited to learned men like themselves. A similar assumption was initially made for the internet, whose culture was primarily defined by tech-savvy college graduates. The internet was an "island of sanity" because it was limited to a relatively small group of like-minded people choosing to use it. What is happening now is the success of democracy, not its failure. For all its fanfare within our culture, very few countries, organizations, or institutions have supported true democracy, and it's unlikely they'll start now.


I almost agree with the first two sentences, with this modification:

> American founders ... were not too keen on an unfettered discourse of the masses

I expect the founders' thinking about information sharing was largely based on their circumstances (printing presses, traditional mail service, town announcements, etc.). Perhaps most founders considered them immutable realities. Did any think of them as movable constraints?

Did any think about the downstream economic and political consequences of lowering the cost of communication? I don't know off the top of my head. Have you seen any evidence of this?


> In its infancy, the internet was an "island of sanity," a (socially and intellectually) secure enclave from the real world.

Just find another place on the internet with a barrier to entry that hasn’t been eliminated yet.

Back to Gopher and IRC I guess.


> Back to Gopher and IRC I guess

Yes, and happily so, for me at least. I use HN via RSS (except when making the odd comment), YouTube-DL via the terminal, and the mainstream internet for ecommerce.



Ironically, this law would give tech companies vastly more power to censor and police online speech. As soon as tech companies are made legally liable for any speech that occurs on their platform, they would be both legally and morally justified in using draconian measures to preemptively block activities that could expose them to liability. For example, banning all controversial hashtags, automatically banning anyone who promotes controversial hashtags, or deleting tweets/individuals that their algorithms have flagged as being problematic.
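To make concrete how blunt that kind of preemptive filtering tends to be, here is a minimal, purely hypothetical sketch; the blocklist, strike rule, and function names are all invented for illustration, not taken from any real platform:

```python
# Hypothetical sketch of liability-driven preemptive moderation.
# The blocklist and strike threshold are invented for illustration.

BLOCKED_HASHTAGS = {"#controversialtag", "#riskyevent"}  # hypothetical list

def moderate(post: str, author_strikes: int) -> str:
    """Return 'allow', 'delete', or 'ban' for a single post."""
    tokens = post.lower().split()
    if any(tag in tokens for tag in BLOCKED_HASHTAGS):
        # When every flagged post is a potential lawsuit, one prior
        # strike is enough to ban the author outright.
        return "ban" if author_strikes >= 1 else "delete"
    return "allow"

print(moderate("see everyone at #riskyevent", author_strikes=0))  # delete
print(moderate("see everyone at #riskyevent", author_strikes=1))  # ban
```

Note that a rule like this cannot distinguish discussing a hashtag from promoting it; that bluntness is exactly the draconian overreach described above.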

In all of these cases, Twitter and Facebook would argue that they don't have the manpower to quickly and thoroughly review every single interaction that occurs on their platform. And they don't have the luxury of being permissive in allowing free speech. And they would be right.

If you're afraid of online free speech being aggressively policed by tech companies, their customer-service reps, and their automated algorithms, then such proposals should absolutely scare you. You could very well see a 21st century where Mark Zuck becomes the next Rupert Murdoch.


This is absolutely the truth. I've talked with attorneys about what this would mean for my side hustle (podcast hosting) and their advice is to have tools for doing exhaustive moderation ready to go. The risk of being sued for user content is large enough to be business-ending for startups of my size.

I sincerely hope that this is just political theater that never manifests into law. Section 230 is a blessing to the modern internet, one that goes wildly unappreciated and is terribly misunderstood by the folks who benefit from it most.


Lawyers tend to be risk averse, and this is for good reason. A good lawyer will probably help you think about negative consequences.

In my opinion, a great lawyer will additionally discuss risks and the costs of mitigating them, particularly given a limited budget.


Without 230, that's probably correct. But with the specific changes being described, I'm having a hard time understanding how it would affect a podcast hosting service. Can you elaborate?


In the same way as any other publishing platform. Immunity under some subjective (or poorly defined) terms is as good as no immunity at all. How can I know what will lead to wrongful death? I would need to police every piece of uploaded content for anything that I feel could reasonably lead to that.


No, this may be your prediction of what may happen. Please don't abuse the term "absolutely", particularly with an uncertain prediction.


I don't think this is ironic. It seems to me the intention is to encourage tech companies to be more selective in what they allow.

I do not agree with your claim that "they would be both legally and morally justified in using draconian measures". Here's why:

1. Morality is different from legality, especially in this context

2. Draconian means "excessively harsh and severe". I don't think the bill if passed would provide protections for companies that are "excessively harsh or severe". These companies still operate in a broader context that includes market forces, public perception, and internal leadership in addition to regulation.


>The bill would make it clear that Section 230, [...] does not apply to ads or other paid content

seems reasonable

>does not impair the enforcement of civil rights laws

what types of civil rights abuses were protected by section 230? I'm under the impression that saying/posting racial slurs isn't illegal.

> and does not bar wrongful-death actions.

what's the current bar for suing an establishment for wrongful death? if a bunch of radicals met at a bar and plotted to murder someone, would the bar be responsible if they didn't intervene?


> what's the current bar for suing an establishment for wrongful death? if a bunch of radicals met at a bar and plotted to murder someone, would the bar be responsible if they didn't intervene?

If the bar had a method of grouping people together based on their interests, then did nothing about violent speech in the bar while promoting other violent interests and showing ads promoting body armour, then yes, the bar should be held responsible for acts of violence that come out of meetings in the bar.

By the way, Facebook is nothing like a bar.


I think a more apt metaphor would be: if a bunch of radicals meet at a bar, repeatedly and loudly state their intention to murder someone, loud enough for all the employees to know, and also get into bar fights with the people they talk about murdering... would the bar owner and employees be responsible for not intervening? For not reporting them to the police and banning them from the establishment?

The answer is yes. Yes, they would be liable.


>The answer is yes. Yes, they would be liable.

under what legal theory? A quick search says no. https://law.stackexchange.com/questions/3671/at-what-point-d...


But if they rented private karaoke rooms, and you couldn't hear or see what they were talking about, just that they rented a room regularly, then you wouldn't be, eh?


GP’s argument hinges on the premise that websites can “hear” everything posted. Just because a human doesn’t see it doesn’t mean the system didn’t. They already scan all posts for their targeted ad system.

Whether that’s a valid argument or not is up to you.


> would the bar be responsible if they didn't intervene?

What state do you live in?

It's even worse for tech companies, because of the way people use these services.

As a completely hypothetical example that I'm in no way condoning, let's say someone in a forum posted a comment like, "I hate n-words. We should off them all." The comments in the forum degenerate from there, and include posts of the form, "YY black church starts service at 10:30 on Sunday." and, "Oh yeah, you mean the N-word church at XXXX South Whatever Blvd in Charleston SC?"

I think you see where I'm going with this. Taken separately, each of those comments is either non-specific, or not threats at all. But taken together, it's a different story.

Now, imagine this hypothetical tech firm ran ads on their forums. It wouldn't even matter whether or not they ran ads on the forum in question, because the very fact that they decide where to run ads would make them a publisher and ineligible for section 230 protections.


the "meet at a bar to plot a murder" metaphor isn't at all how the the tech companies faciliate these things. Its more like if the group meets at a bar, and exclusively communicated through with each other through the bartender. In that case, would the bartender have an obligation to tell law enforcement that a group at the bar was planning a murder? I think so.


I think "communicating through the barkeeper" makes the barkeeper's involvement sound much more intentional than it actually would be. It's more like if a group regularly gathered at the bar, using the barkeeper's space and supplies, and planned a murder.

The barkeeper might have a responsibility to listen in to the conversation and report if they are planning a crime, and the group is using the barkeeper's space to organize, but it's not like the barkeeper ever agreed for that specific group to use their space.


But the platforms have perfect knowledge of the text content of these conversations. It's not unreasonable to say they have near-perfect knowledge of the graphic and A/V content also; that stuff is easily parsed automatically at this point, and they're likely already doing that parsing. Hell, it's part of their business model for increasing the value of their ads.


> But the platforms have perfect knowledge of the text content of these conversations.

"[P]erfect knowledge"? They have knowledge of the text. You can even say they have perfectly accurate knowledge of the text. Do they have perfect comprehension and understanding of the text?

They have some comprehension and understanding in order to direct ads. But it is not perfect comprehension and understanding. And unless you're content with your apparently violent talk that's actually about a guild raid in WoW getting blasted by an AI, they need humans to ultimately decide what to do about content.


That example also fails. Without a very high degree of oversight of content, Twitter, Facebook, et al. are often unaware of what's actually posted. Not in the sense that it's unknowable, but that the technical systems facilitating the communication are just that, technical systems. They are not people. A go-between who is both aware of the messages and capable of comprehending them (sent in the clear) can easily be argued to be part of the conspiracy. But a go-between who is incapable of comprehending the messages has a valid defense (especially if the vast majority of the things they sit between are legal communications and not criminal conspiracies). Twitter et al. are capable of knowing, but due to the scope of their systems, the communication often has to be obvious or brought to their attention for them to become aware of it and react appropriately.


>what types of civil rights abuses were protected by section 230? I'm under the impression that saying/posting racial slurs isn't illegal.

Many people seem to believe that moderating platforms and banning accounts or otherwise enforcing terms of service for any content or behavior which isn't strictly illegal is a violation of their civil rights, and that this abuse is enabled by Section 230. They believe that any alteration of user submitted content makes the "platform" no longer a "platform" but a "publisher" and that they should be legally liable for that content.


I'm worried this would codify the power to fine whichever way the political wind blows, a power that seems to grow with the size of the payroll, particularly at the FTC.

Here is an idea for better oversight of big tech that does not involve more regulation: a decentralized moderation system that replicates the US court system.

If something is flagged, then a jury made up of randomly selected users decides whether the comment is inciting violence, etc.

If one user says it does not, then there is no ban/deletion, etc. You can iterate many times off that into a system that is scalable, cost-effective, decentralized, transparent, and just.
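A minimal sketch of that flag-and-jury loop, under assumptions the comment leaves open (jury size, how jurors are sampled, and jurors modeled as plain functions that return True to remove the flagged content; all names here are hypothetical):

```python
import random

def jury_verdict(flagged_text, jurors, jury_size=12, rng=random):
    """Draw a random jury; remove the content only on a unanimous vote."""
    jury = rng.sample(jurors, jury_size)
    votes = [juror(flagged_text) for juror in jury]
    return all(votes)  # one dissenting juror acquits the content

# Toy usage: 100 jurors who each vote to remove 90% of the time.
jurors = [lambda text: random.random() < 0.9 for _ in range(100)]
print("remove" if jury_verdict("flagged comment", jurors) else "keep")
```

Note how strongly the unanimity rule protects content: even jurors who individually vote to remove 90% of the time produce a unanimous 12-person verdict only about 28% of the time (0.9^12).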


Aren't you describing Parler's failed attempt at pretending they moderate their content?


By definition, if they pretended, it means they were not serious about getting to a solution that the average person would find reasonable.

Note that it doesn't mean AWS or GCP or CNN had to agree, it just means that it really resembled the process that we have for the US court system.

In fact, something well implemented would potentially threaten (and, by definition, be attacked by) the Twitters, FBs, AWSes, etc., because it would become a more reliable predictor of content moderation than what currently exists via ToS. It would effectively make catch-all ToS obsolete, and would decidedly be a vote against centralized control.

We can likely agree that the words "well implemented" and "Parler" do not belong in the same sentence.


> I'm worried this would codify the power to fine whichever way the political wind blows

It depends on how the legislation is written, which will determine who makes various decisions and how the decisions are made.

Can you tell me about examples of legislation that you think have been done well? And what is your idea of 'well done'? Should I interpret your comment as meaning that you don't like arbitrary changes in political opinion -- that you think legislation should have deeper standards or principles? In the case of social media and speech, what do you think the standards should be?

Can you also give an example of legislation that has not been done well?


>>> It depends on how the legislation is written, which will determine who makes various decisions and how the decisions are made.

--- Given the regulation itself this is not a bipartisan effort (this is different from bipartisan support)

>>> Can you tell me about examples of legislation that you think have been done well

---I think the Constitution is a sound example. Only the takings clause seems to have some pushback. Another hugely successful act is the U.S. Airline Deregulation Act. This is the only relatively recent act of Congress that eliminated an entire US agency (the CAB), which itself makes it a rarity among the rare successes. The Sherman Act is one that everyone seems to come back to (and generally seems to be underpowered versus abused), plus the Voting Rights Act of 1965, the Internet Tax Freedom Act, and the Personal Responsibility and Work Opportunity Reconciliation Act. Most acts restricting the power of the federal government tend to enjoy broad support over time.

>>>Can you also give an example of legislation that has not been done well.

---No Child Left Behind Act, Digital Millennium Copyright Act, Patriot Act, etc.

You can tie together almost all bad laws: they are passed (a) as a reaction to specific events and circumstances, (b) without bipartisan support, and (c) usually sponsored by a very specific group. Such laws tend not to pass the test of time.

The reverse is also true, of course.


Your comment implicitly defines legislative success in terms of long-running popularity.

What is the intended purpose of such a definition? Is it meant to be explanatory? Predictive? Normative? Have you thought about the limitations of such a definition along these three categories?

Is this definition an attempt to find a neutral definition of legislative success? Or at least an attempt to find a definition that can be easily measured?


> Given the regulation itself this is not a bipartisan effort (this is different from bipartisan support)

Would you like to finish the sentence? It is incomplete.


> You can tie together almost all bad laws: they are passed (a) ... (b) ... (c) ...

Can you provide detailed support for such a claim?


Section 230 reform feels to me like gnawing around the edges of the problem. I've started thinking that we should consider going nuclear on online platforms with some version of "social networking platforms funded substantially by advertising revenue are prohibited". Something that will restrict, as powerfully as we can, the existence of platforms incentivized to solely maximize user engagement. Congress can strip them of their liability shields. Facebook can set up all the moderators and review boards it wants. In the end, we all know how their bread is buttered.


For the sake of argument, assume I agree with you.

Next, would you be willing to talk about the complexities of gathering places, free speech, and who funds those gathering places? Would you be able to formulate one, two, or three basic theories (or models) of how these factors relate? Could you test them?

The reason I ask is this: I think there are some recurring patterns that connect these factors. I won't claim these patterns are universal, but I would expect to find useful patterns across many regions and many periods of time.

Having a tested, applicable model of how these are connected can help find the leverage points to target with regulation to achieve particular goals.

Caveat: what I've stated above is, more or less, a compact statement of how I think public policy should be made, at least for the most significant issues that we face.

And, if an issue is not deemed significant enough to warrant this kind of analysis, that's understandable. However, I don't think anyone should fool themselves into thinking that some 'obvious' action will also be a verifiable *solution*. It might happen to seem effective to some people, but that doesn't make it an effective policy or decision -- very few of us are provably good at separating association from cause and effect.


I'm generally fairly hesitant about restricting 230 protection, but removing it for paid content, such as ads, makes a lot of sense to me.


The failure, though, is that the government isn't accounting for content next to ads. Ad companies are likely to keep user-generated content that drives good ad revenue, even if it's also harmful.

This is a start, but the solution is to revoke 230 entirely.


Are you familiar with the case that originally prompted Section 230? It would be a pretty serious problem if hedge funds could sue your ISP for letting you say bad things about them. (In fact, without Section 230, ad companies could sue YCombinator for letting you make this very comment.)


I am familiar, and Section 230 was not the answer. And yes, normally anyone can sue anyone, I'm not sure why tech companies deserve to be magically immune to this.


Is there a better answer you have in mind? I don't think it's a question of tech companies deserving or not deserving immunity - the question is how we can avoid a locked-down sphere of Internet discourse where it's impossible for the average person to say bad things about powerful organizations.


I guess, theoretically, companies would either have unmoderated user content, or completely sanitized user content that goes through human review. Little to nothing in between.

A big grey area seems like websites that give users tools to moderate content and let them handle it. Not sure what the courts would think of that.


Presumably this would lead to this site shutting down.

Do you think that's an acceptable trade-off?


This. If you make a website liable for what someone posts, sites will be much more heavy-handed in their moderation approach and will start taking down content with even a hint of a problem.

Just look at YouTube with kids. They’ve taken such a heavy-handed approach to content moderation regarding kids’ content simply because of laws like COPPA. But then videos not involving kids get swept up in the mess because the algorithm decided their video is about kids. Even simple 10-second clips of Spongebob get flagged because of this.


First, your presumption is completely bull----. HN isn't ad-supported. And every other class of business has normal liability; tech companies are the only business that claims to need magical immunity to lawsuits, and for some reason, people believe them.

Second, if websites shutting down is the tradeoff for harmful websites being able to be sued, yes, absolutely that's acceptable. Websites that propagate mass harm and disinformation need not exist, and their replacements will be better.


If section 230 disappears, it will likely mean HN will also disappear.

Also, HN is high-end marketing, an input for Y Combinator's founder program; if there were no Y Combinator business funding it, it would not exist.


On the contrary, I expect the death of harmful ad companies like Google and Facebook would cause an explosion of growth in the Internet startup space.


I doubt the US govt will break their top surveillance partners, and three of the five companies responsible for most of their GDP growth over the past decade, in any meaningful way. After all of this saber-rattling, they will still stand, probably with stronger positions, because all the new compliance costs will shut out new entrants.


The US government's priorities changed when it became clear those companies could tamper with their election chances. All other concerns immediately became secondary.

> all the new compliance cost

We don't need new compliance costs, we just need the DOJ and FTC to enforce our existing laws and break Google and Facebook up. They're already in violation of the law.


I think section 230 should be modified to explicitly provide protections only for platforms that ban all moderation that isn't related to security or the most heinous of illegal content such as CSAM. Any platform that bans for ideological reasons should not be shielded from liability when they get it wrong.

I think it is also important to advocate for decentralized, federated communication channels. Increase the total number of possible points of failure, and they absorb the load in a more graceful manner.


I think it's fair to sue to take stuff down, although you should be able to sue to put stuff back up too.


WashPo has a more detailed article: https://www.washingtonpost.com/politics/2021/02/04/technolog...

Correction as of 12:57 pm EST: Based on a comment below, I see now that the link above is talking about a different bill from Senator Klobuchar with an anti-trust focus. She's been busy.


I don't think this is the same bill.


Thanks for the correction. Here is the correct bill:

https://www.warner.senate.gov/public/_cache/files/4/f/4fa9c9...



The bill (https://www.warner.senate.gov/public/_cache/files/4/f/4fa9c9...) would exclude not only the things listed in the source article but also any kind of injunctive relief. Doesn't this end up being basically a full repeal? I'm sure Stratton Oakmont would have been satisfied with an injunction requiring Prodigy to take down the critical messages on their bulletin board.


I used to believe that 230 is a good thing in all cases. But I have absolutely no sympathy for the people who post others' addresses or family members' information with the intent of bringing harm. That is one case where I think it goes way too far, and social media platforms should absolutely be held liable for that type of harassment, which can cross into the real world. This bill seems like a good step in that direction.

And I do think platforms should be able to get sued and held liable for targeted violence against individuals that is incited on their platforms. The Reuters article is light on information, but here's something that goes into a bit more detail.

https://royalexaminer.com/warner-announces-the-safe-tech-act...


If we’re going to adjust 230, it’s a perfect chance to favor smaller platforms. e.g. if you are under a certain size, you get to keep full 230 protections. The size and power of these companies needs to be addressed.


So all I need to do is have a bot register n+1 accounts, then I can legally force them to publish whatever I want them to or else they have to go under to avoid legal liability? Neat.


My single sentence obviously doesn’t capture the nuance that would be put into a law. And there are many ways to measure the size of a company.

Also, removing the liability shield doesn’t force companies to post things. That’s just 100% not what that is.


Bill Clinton said in 2000 that limiting the Internet is like nailing Jell-O to the wall. Now not only has China managed to do so; even the US is starting to try to nail the Jell-O as well.


Is China really limiting Internet though?


It's not 100%, but enough to nourish homegrown players. As we have seen from the tech war and the Twitter mass bans, having a viable homegrown ecosystem is very important.


The first part of the bill seems reasonable...

This part

>Under the SAFE TECH Act, the word "information" would be swapped out for the word "speech," narrowing the law and potentially erasing liability protections for a range of other illicit information-sharing that happens on online platforms.

I worry about. Who gets to decide what's classed as speech or not? What's the definition of 'speech'?

Will it end up being left to a judge to determine on a case by case basis?



Too many people do not understand the First Amendment (1A).

In case you have not reviewed the 1A recently, please read this carefully: https://www.law.cornell.edu/wex/first_amendment

> The First Amendment of the United States Constitution protects the right to freedom of religion and freedom of expression from government interference. It prohibits any laws that establish a national religion, impede the free exercise of religion, abridge the freedom of speech, infringe upon the freedom of the press, interfere with the right to peaceably assemble, or prohibit citizens from petitioning for a governmental redress of grievances. It was adopted into the Bill of Rights in 1791. The Supreme Court interprets the extent of the protection afforded to these rights. The First Amendment has been interpreted by the Court as applying to the entire federal government even though it is only expressly applicable to Congress. Furthermore, the Court has interpreted the Due Process Clause of the Fourteenth Amendment as protecting the rights in the First Amendment from interference by state governments.

Please pay particular attention to whom and what 1A applies.


Who do you think doesn’t understand 1A?

I’m not sure if you mean that the senators don’t understand. Or social media companies. Or us?

Government (in the US) regulates commercial speech all the time. Try broadcasting without an FCC license.

Government also regulates privately owned public places. Try banning people of a particular race from your shopping mall and see what happens. Shopping mall operators aren’t free to absolutely limit speech however they like, even though private.

I think it’s reasonable that government could treat social media companies as privately owned public spaces and add new regulations or requirements. I don’t think this would mean a constitutional amendment to revise 1A, but using the free speech principle for these regulations makes sense.

Of course, other countries have free speech laws and customs that have nothing to do with 1A.


> Government (in the US) regulates commercial speech all the time. Try broadcasting without an FCC license.

Yes, I understand this.

> Government also regulated privately owned public places. Try banning people of a particular race from your shopping mall and see what happens. ...

So far, this is not an issue of speech.

> ... Shopping mall operators aren’t free to absolutely limit speech however they like, even though private.

Ok, I see where you were going with it... I'm not a legal expert, much less in this area, but after skimming over [1] and [2] this appears to be a complex area.

[1]: https://en.wikipedia.org/wiki/Pruneyard_Shopping_Center_v._R...

[2]: https://www.ccim.com/cire-magazine/articles/states-speak-out...


> I think it’s reasonable that government could treat social media companies as privately owned public spaces and add new regulations or requirements.

Yes, this kind of argument seems to have a lot of traction right now, and I think there is precedent to build on, though -- again -- I'm far from an expert on it.

> I don’t think this would mean a constitutional amendment to revise 1A...

Agreed.

> ... but using the free speech principle for these regulations makes sense.

What do you mean by the "free speech principle"? I think 100 different people would probably have _at least_ 10 significantly different ideas of how free speech should work.


> What do you mean by the "free speech principle”

Good point, sorry, I said this like it was some simple concept that everyone understands in the same way.

I mean it in terms of what the UN calls freedom of expression, which I think is based on the Ancient Greek origin. [0]

That’s not absolute and there’s still debate, but, in my mind, it’s a philosophy that more communication on more topics is ultimately better for improving society and doing great things than less. An application that I think is relevant to this definition is Postel’s Law [1] to be liberal in what’s accepted and conservative in what’s said.

And I think part of this is to not try to have a rules based approach of specific banned topics because it ends up being circular, eternal, and nonproductive. So any efforts to get specific rules around this seem to harm groups.

[0] https://en.wikipedia.org/wiki/Freedom_of_speech [1] https://devopedia.org/postel-s-law


In my experience with HN, many people do not understand even the basics of 1A. They have fundamental misconceptions, and they repeat them over and over.

To take just one example on this particular comment page, I wonder if the person writing this comment understands 1A: "While this looks good on paper, it would severely hinder free speech on these platforms if not already."


That comment makes no mention of the first amendment.


The above comment is obviously true, but how much does it advance the conversation? Not much in my opinion, but I'll try to build on it. I'll try to unpack what I think you were getting at.

While it is obvious that the comment doesn't say "First Amendment", that is not the end of the matter. When people talk, it is important to think about what they mean in a particular context. That comment very much relates to 1A, whether it uses a particular phrase or not, because it exists in this context of discussion about a bill talking about government regulation, which very much ties into 1A.

Let me circle back. My experience here on HN after trying to have substantive 1A discussions is, frankly, disappointing. Perhaps I thought that people smart enough to program computers would be somewhat good at unpacking logical arguments based on a complex legal history. I've found mixed evidence for this. Instead, I often see:

* Many people are simply unskilled communicators.

* Many people are unnecessarily pedantic -- meaning they make a tiny irrelevant point but miss the huge issues in play and don't connect with others as people

* Many people lack the tools or willingness to clarify, learn, and understand.

* In terms of subject matter, the basics of 1A are not well understood. Instead, dogmatic assertions tend to be used.

* FUD and slippery slope arguments are prevalent.

* Balanced discussions of pros and cons are few and far between.

* Even if the basics of 1A are understood, the authors often do not do a good job of formulating their questions in a way that signals their understanding and clarifies their question.

These are unfortunate patterns I've seen. This is why I shared some basic information to ground the conversation.

You may call me a critic, skeptic, or cynic. However, I think I'm being fairly accurate as to the quality of intellectual discussion around 1A on Hacker News.

So much of the 'legal' conversation I see here on HN would be laughed out of even an undergraduate law class. We can and should do better. It is not for lack of intellectual reasoning ability. Other things seem to be getting in the way: dogma, fixation on narrow things to the detriment of the broader context, ego, unexamined rigid beliefs, a lack of historical awareness, insular worldview, etc.


This isn't a law class, so let's not apply those standards. The 1st amendment itself refers to "freedom of speech" so there is absolutely a principle that exists outside of the 1st amendment. There is absolutely an argument to be made that the scope of the 1st is too narrow to protect freedom of speech in the US.


> The 1st amendment itself refers to "freedom of speech" so there is absolutely a principle that exists outside of the 1st amendment.

I wouldn't say "absolutely" [1], but yes. At the risk of being too direct: I think we both know this is obvious. Do you have a finer point to make?

> There is absolutely an argument to be made that the scope of the 1st is too narrow to protect freedom of speech in the US.

I wouldn't say "absolutely" [1], but I tend to agree. This does raise a follow-up question: To what degree do you think free speech should be protected in the US? How do you balance it against other principles?

All of this said, I still stand by my commentary above, including (a) the unimpressive quality of argumentation and communication on HN; (b) a lack of understanding of how 1A and free speech interrelate; and (c) people here are not hopeless; they have the intellectual ability to do better -- if they only put a bit more effort into what they put out into the world.

[1] Are you aware of the pitfalls of using "absolutely"? I recommend reading https://www.dailywritingtips.com/absolutely/


> This isn't a law class, so let's not apply those standards.

You are missing my point. If you re-read my comment carefully, you will see that I explained in some detail why I'm often disappointed in the quality of discussion here.

Only one part of my comments pertained to an undergraduate law class. I mentioned it not because I expect specialized knowledge of the law, but because I expect a certain amount of preparation before making strong claims. If you make a claim in a law class, you must be ready to support it. You are expected to know the history, arguments, and connection to the topic at hand.

I think many software developers can relate to the need to explain their claims in a work setting. For example, if a developer suggests that an organization switch from technology X to Y, they should expect to be asked why. So, it is not very different from the expectations of an undergraduate law class.


I don’t know, usually when I talk and think about freedom of speech, I don’t mean 1A. I mean the principle of free speech that is the idea parent to 1A.

I’ll frequently try to think about freedom of speech when trying to work out some community process or way to interact with people and if someone thought I was talking about 1A, I would feel bad for their confusion.

I’m not sure how to prevent this in my speech, as if I said “free speech (independent of and not specifically 1A)” every time, it would probably be more confusing.


Thanks for clarifying what you mean. I get it.


Your quote makes it clear that "to whom and what 1A applies" is up for interpretation. While the wording of 1A specifically only prevents Congress from passing laws that restrict freedom of speech, it has been extended by the courts to cover the entire federal government, and state governments, in order to protect freedom of speech. There is no reason that, in the future, 1A couldn't be extended further.


[flagged]


Considering they are the highest court in the USA, I would say their opinion is most relevant. That doesn't mean they can't be wrong morally or ethically, but their opinion is definitely relevant.


Their opinion is relevant as long as it is enforced, whether or not you personally like it.


While this looks good on paper, it would severely hinder free speech on these platforms if not already.


Restricting advertising that is deemed harmful is not unprecedented, for instance tobacco ads in much of the world or prescription drug ads in sane countries. This doesn't even restrict any new kind of ad, it just opens these platforms up to liability for the ads they run.

Seems pretty reasonable to me. It means if my system was exploited via an ad on Facebook[0], I can sue Facebook for showing me that ad.

[0] Situation is hypothetical, I don't actually use Facebook.


When you say "free speech", are you referring to the First Amendment (1A)? If so, are you claiming that the bill would conflict with 1A? If so, on what basis?


Not the OP, but I believe the freedom of speech is a concept that exists outside of the Constitution. The OP may be referring to a principle or ideal instead of a constitutional construction.


Obviously how a social media platform moderates itself has virtually nothing to do with 1A. Limiting or eliminating section 230 would place more legal liability on the platforms over users' content. They would have a much greater incentive to limit users speech whenever there is the slightest risk of putting the platform in legal jeopardy. The fact that social media platforms aren't considered "publishers" means that the financial benefit for having a quasi free-speech community overrides their legal risk.


Just break them up, this type of legislation is going to harm everyone.


I think we should do the reverse: No moderation outside of court orders or you risk losing 230 protections.


What do you think would happen to Hacker News if that was the law?


> I think we should do the reverse: No moderation outside of court orders or you risk losing 230 protections.

I hope you like spam.

Also, what you propose is equivalent to forcing you to pay thousands of dollars and wait weeks or months to take down an obnoxious sign that I put up in your yard. How workable does that sound to you?


That's a great idea, if your goal is to impel every website to either get rid of its forums or devolve into 4chan.


Even 4chan would be veeeeery different without moderation.


I'd imagine Hacker News would be a much less pleasant place if dang was outlawed.


Then you may as well repeal it because Section 230 is redundant if you don't moderate content.


No spam filtering? Presumably no DDoS protection either? Sounds exciting!


Good. It is bizarre to me that one could post literal death threats online endlessly without fear of any consequences, yet if you were to yell the same threats in a public location, the police would be summoned. My main concern would be that any regulations punishing companies for allowing threatening material on their site should punish in proportion to the audience that was exposed to it. Big fines for big tech, tiny fines for a startup.


Death threats are just as valid online (or at least they are in many places) as they are in the physical world. There is a difference in terms of whether the threat is genuine or not; if one anonymous person threatens another anonymous person, can it really be believed that the threat is credible, despite its obviously impractical nature?

Just like in the real world, the venue is not responsible. Walmart is not responsible if someone shouts death threats inside. Walmart or other users/customers can call the police, but it's not Walmart's job to tackle the guy. Really, the only difference is that local and state law enforcement is less inclined to enforce the law against online individuals, given the time and energy required, in addition to the threat being significantly less likely to be credible.


> It is bizarre to me that one could post literal death threats online endlessly without fear of any consquences

Maybe you phrased it wrong, but the user cannot post death threats online without fear of consequences; it's just that the consequences don't fall on the platform.

A better analogy to a public location would be holding the city liable for not removing a death threat someone affixed into some city-owned monument.


I should point out that we didn't tolerate this level of liability protection when Hollywood was involved. The safe harbor for copyright infringement has a notice-and-takedown requirement that CDA 230 doesn't have. If someone publishes my home address and an incitement to violence on a web forum, the platform doesn't have to do anything in order to maintain their safe harbor. However, if that same person were to copy a photo I took of my house in order to demonstrate where to go, then they have to accept a DMCA 512 takedown request or I can sue them for damages. (Assuming I jumped through all the necessary copyright hoops in order to do so.)


Death threats posted online are just as illegal as if they were yelled in public. Police (FBI) investigate and prosecute lots of death threats, and there are lots of examples of people who thought they were anonymous being found and prosecuted for criminal threats. [0]

So I suppose it’s regulated in that it’s criminal and criminal actions are prohibited.

I’m not sure how you would fine Twitter because one of its users made a death threat, any more than you would fine a McDonald’s because someone in their restaurant yelled a death threat. Or a megaphone manufacturer, etc.

[0] https://www.justice.gov/usao-wdnc/pr/federal-jury-convicts-b...


That threat was issued against a political candidate, which raised it to the FBI's attention. The vast majority of violent threats receive zero attention.

So it's technically correct to say that it's "just as illegal", but that's not the best kind of correct. While I'd be very interested in a directive that the FBI should take all violent threats online seriously, they don't have anywhere near that kind of resources. And I have a feeling that if they did start making numerous requests to unmask anonymous users, a lot of HN readers would oppose that as well.


There are many threats from normal people to normal people that result in conviction. [0] I didn’t intend for my example to be representative of all.

If you report a death threat to the police or FBI, it will be investigated. I think the problem is they have to be substantiated. If I just run around the town square saying “I’m going to kill you,” that’s not actually a criminal death threat. So someone saying “#diaf” is not making a death threat; it’s obnoxious but not a crime.

Fortunately, the murder rate is not correlated with internet death threats, or we’d all be dead.

This doesn’t mean that platforms shouldn’t stop boorish behavior, but it’s not a crime and we probably don’t want the FBI investigating stuff that’s not a crime.

But when it is, when I think someone is going to kill me, I feel confident that I can call the law and they’ll do what they can.

[0] https://elkodaily.com/news/local/man-arrested-for-online-dea...


>Good. It is bizarre to me that one could post literal death threats online endlessly without fear of any consequences, yet if you were to yell the same threats in a public location, the police would be summoned.

That's only because the former is done anonymously and the latter can easily be attributed to you, right? If a kid sent death threats to his classmates using his real-life Facebook account, the police would be summoned too.


The liability should reside solely with the person making the death threats, not with the communication tool they use to make the threats IMHO.


I mostly agree. But if the communication tool's owner knows or ought to know of the death threats, then perhaps they should bear some liability.

(If I were the communication tool, I'd add some term to my TOS that the user indemnifies me against losses due to speech the user made. So if I get sued because of something you said, I can come after you for the losses. Of course, this requires that you KYC . . . so anonymity might become harder in a post-S230 world. I sure wouldn't want to be someone running e.g. an anonymous far right or militia forum if I could be held liable for wrongful death or harassment claims.)


threat of harm is protected speech.

It has to be shown that there is both intent and the ability to carry it out before it starts entering the territory of illegal.

Which makes sense if you think about it. A 5 year old child threatening you with violence is not the same as a 25 year old man with a bat in their hand.

The reason threats of violence online are not illegal is because they have no ability to carry it out. This is actually a large reason why doxxing is such a big deal online. Suddenly, these people DO have the ability to carry out the threats.

If you REALLY want to go at it from that perspective, these companies leaking/selling PII should make them legally liable.


>Called the SAFE TECH Act, the bill would mark the latest effort to make social media companies like Alphabet Inc’s Google, Twitter Inc and Facebook Inc more accountable for “enabling cyber-stalking, targeted harassment, and discrimination on their platforms,” Senators Mark Warner, Mazie Hirono and Amy Klobuchar said in a statement.

Ehh.

Slippery slope theory: if you don't want to be subject to these behaviors, don't use social media. While a good bartender will cut off patrons who are clearly drunk, if you then hop in your truck and run into a tree, you can't sue the bar.


> if you then hop in your truck and run into a tree, you can't sue the bar

Actually I don't think that's entirely true. I'm pretty sure there are laws in some places that if the bar knows a person is drunk and knows that they will drive, and does not prevent it, then the bar or the bartender (or both) can be held liable if the person then gets into an accident.


CRIMINAL PENALTIES

The Liquor Control Act prohibits alcoholic liquor permittees or their employees from selling or delivering alcohol to intoxicated persons (CGS § 30-86(b)(1)).

Although the statute does not define "intoxicated persons," the Connecticut Supreme Court held in 1937 that someone can conclude that a defendant is intoxicated if he or she is staggering and not able to run very well (State v. Katz, 122 Conn. 439).

Violations are punishable by up to a $1,000 fine, up to one year imprisonment, or both, for each offense


Known as "Dram shop laws": https://en.wikipedia.org/wiki/Dram_shop


It's worth a google search before claiming this:

https://www.google.com/search?q=bars+held+responsible+for+dr...

https://www.google.com/search?q=dram+shop+laws

"A commercial establishment may be held legally responsible for over-serving a visibly intoxicated person, or for serving alcohol to a minor, when that individual causes death or injury after leaving the establishment and causing: A motor vehicle accident, A pedestrian accident, Assault or other physical altercations, Other events leading to someone else's injury or death"


You can be too drunk to drive without being visibly intoxicated.

Regardless, the onus is on the individual to take an Uber to the bar if they plan on drinking.


Re: "if you don't want to be subject to these behaviors don't use social media"

This specific argument is weak, _and_ the form of the argument is weak. I suggest you look for better argumentation.

Your argument suggests that not 'using' social media frees a person from all of the effects of social media. That is oversimplified and untrue. It does not address the connectedness of people and culture, for one.

I'll use two examples to explain.

1. Your argument takes the same form as the following argument: "if you don't want to be run over by a drunk driver, don't drive". Tell that to a pedestrian who was killed by a drunk driver.

2. Social Media is not like Vegas. What you say on Social Media does not stay on Social Media. It has lots of spillover effects.


> if you don't want to be subject to these behaviors don't use social media

It is nearly impossible to engage in modern democracy without engaging in social media. I'm not saying this isn't a slippery slope, but when the only way to participate in our economy and government is online... well, there's a general public good element that needs to be considered by society. I'm not a big fan of government regulation, but the socials clearly aren't going to do it fairly.


> It is nearly impossible to engage in modern democracy without engaging in social media.

I think you're using too broad a definition of "nearly."

Aside from HN, and the occasional write-only rant on Twitter, I do not use social media. And have never used it in any way to keep in touch with my government.

And I'm not alone. There are millions of people in the United States and hundreds of millions of people around the world who do not use social media.





