1. Incentivizing Online Platforms to Address Illicit Content
The first category of potential reforms is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230’s immunity for defamation.
a. Bad Samaritan Carve-Out. First, the Department proposes denying Section 230 immunity to truly bad actors. The title of Section 230’s immunity provision—“Protection for ‘Good Samaritan’ Blocking and Screening of Offensive Material”—makes clear that Section 230 immunity is meant to incentivize and protect responsible online platforms. It therefore makes little sense to immunize from civil liability an online platform that purposefully facilitates or solicits third-party content or activity that would violate federal criminal law.
b. Carve-Outs for Child Abuse, Terrorism, and Cyber-Stalking. Second, the Department proposes exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyber-stalking. These targeted carve-outs would halt the over-expansion of Section 230 immunity and enable victims to seek civil redress in causes of action far afield from the original purpose of the statute.
c. Case-Specific Carve-outs for Actual Knowledge or Court Judgments. Third, the Department supports reforms to make clear that Section 230 immunity does not apply in a specific case where a platform had actual knowledge or notice that the third party content at issue violated federal criminal law or where the platform was provided with a court judgment that content is unlawful in any respect.
2. Clarifying Federal Government Enforcement Capabilities to Address Unlawful Content
A second category of reforms would increase the ability of the government to protect citizens from harmful and illicit conduct. These reforms would make clear that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government. Civil enforcement by the federal government is an important complement to criminal prosecution.
3. Promoting Competition
A third reform proposal is to clarify that federal antitrust claims are not covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.
4. Promoting Open Discourse and Greater Transparency
A fourth category of potential reforms is intended to clarify the text and original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users.
a. Replace Vague Terminology in (c)(2). First, the Department supports replacing the vague catch-all “otherwise objectionable” language in Section 230(c)(2) with “unlawful” and “promotes terrorism.” This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230—to reduce online content harmful to children—while limiting a platform's ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it “objectionable.”
b. Provide Definition of Good Faith. Second, the Department proposes adding a statutory definition of “good faith,” which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others. Clarifying the meaning of "good faith" should encourage platforms to be more transparent and accountable to their users, rather than hide behind blanket Section 230 protections.
c. Explicitly Overrule Stratton Oakmont to Avoid Moderator’s Dilemma. Third, the Department proposes clarifying that a platform’s removal of content pursuant to Section 230(c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.
I highly recommend looking at the redline. It's approachable, and doesn't fall into the interpretation biases of the reporter.
I agree with this, but don't forget that interpretation biases will still come into play as the law is enforced — the biases of police, lawyers and judges. So it still makes sense to read others' interpretations of what this might mean in practice.
I've seen this regarding serious legislation like this or even something as mundane as Apple's app store "guidelines".
I may be misinterpreting your wording here (my apologies if I am), but I'd assume the opposite. I'd think people who operate in a realm where text becomes action executed by a machine designed wholly around faithful, reliable execution of text fed to it would come to learn the reality-defining power of rules.
In this sense, law is similar to code, but far easier to exploit.
I would disagree. I feel like programmers see logical contradictions or loopholes in laws and think that if they make the argument in court, the court will segfault and they will go free.
In reality, courts use more inductive reasoning than computers, and aren't as easily tricked.
Have you ever litigated?
Litigation isn’t about “hacking” the court. If that’s how a lawyer is selling themselves, you’re being taken for a ride.
Cases are about resolving novel ambiguities in the law. The vast majority of disputes never make it to court. The two sides lawyer up and one of them is advised that based on the facts the precedent is in the opponent’s favor. As such, settlement is advisable. In a minority of cases, precedent is mixed or not applicable—the facts and circumstances are truly novel with respect to the law. Given the law is finite and reality is infinite, this happens more often than you’d think.
Lawyers thus argue how the law should be extended. Remember, case law is law in common law countries. Judges' opinions aren't interpretations per se, but acts of rule-making.
I see way too few computer engineer criminal masterminds to accept this hypothesis at face value. ;)
Applying to be a lawyer in and of itself can be a process of figuring out the correct paperwork to fill out and asking for a special exemption on a piece of missing information or a missed deadline.
Ah, but don't forget how often the code that gets written doesn't do exactly what the writer expected! Or is exploited by another party...
"For show," to me, implies you can ignore it and charge forward, bull-in-a-china-shop-style. That doesn't work in law or computers; naive invalid input gets rejected by the first-stage parser, and a court complaint completely ignorant of the law can get tossed by the clerk before it even sees a judge's desk. Rather, hacking is understanding and exploiting the consequences of, and nuances within, the rules.
So, quite ironically, you are doing precisely what you are talking about when you take the words "hacker news" too literally.
I don't think rulebook following coders are particularly interested in debating like this on HN.
All it requires is for other humans to accept your interpretation. That depends on the text, your position in society, and that of the other person. The text will convince a few people when the difference in power is small. When the difference in position is large, no text will support the person in the subordinate position.
Lawyers pretend that this is not the case, and that they have mastered objective interpretation of the rules. It is unclear whether they say that because they are stupid, or because they think you are stupid. As programmers, who really are trained in objective interpretation of rules, it's laughable either way.
I try to avoid the error of confusing my lack of knowledge of the grammar of a system with the system itself being stupid. Law and its application have flaws, but it's not an "anything goes" system as you seem to be describing it here.
From my social experience in high school, the board game rules nerds were more likely to become programmers than lawyers. The worst of them went on to be a pro poker player.
The cone of ignorance expands farther, and faster, than the cone of understanding.
Unlearning is even harder than learning, which contributes to the durability of ignorance.
"You'd think /hackers/ would understand that rules are for show."
Ya. I'd like someone to explain cognitive certainty. The opposite of "strong opinions, loosely held". How is everyone else so sure of what they know? The only certainty I have is knowing that I'm probably wrong.
When man found Truth, a worried demon went to the Devil, who nonchalantly said: "Eh, I'll get them to institutionalize it."
This disallows the common practice of open-ended moderation criteria such as "Be kind. Don't be snarky". Proposed section (c)(1)(b) removes the safe harbor except for moderation criteria on the list in proposed (c)(2)(a), which is:
"obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful"
> Provide Definition of Good Faith. Second, the Department proposes adding a statutory definition of "good faith" which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation...
Open-ended moderation criteria such as "Be kind. Don't be snarky" are at risk of not being judged "particular" enough, which is a requirement of proposed section (g)(5)(a), meaning that sites with open-ended criteria could be judged as "not in good faith" according to this definition, and lose their safe harbor.
If you want to be deemed a public-square type of space, that is fine, but you don't then get to impose arbitrary rules about what is said; that would mean it is not a public place and is in fact a private space being editorialized by your private rulings on what speech should be allowed.
This is a very good move and clarifies everything for everyone.
I don't see why this is a good move, other than opening up a bunch of sites to liability. And it's going to make the ToS even longer so that they can make it "particular". Who wins here exactly?
Section 5c is particularly problematic. It disallows shadow-bans if I'm reading correctly.
Section f9 is...suspicious.
Section d4 seems painful for small providers. You lose good samaritan status if there isn't a good way to contact you.
Section c2A is the important one. It basically means that a site cannot remove content unless it is objectively obscene or extreme. Section g5A suggests that you can have a ToS that explains your moderation decisions. But c2A seems to say that you aren't a good faith actor unless you use the objectively reasonable moderations standards defined by the law.
It's not clear that this would even have an effect since clearly you can moderate based on your ToS. A sports site can remove non-sports content even if it isn't obscene.
It seems like the goal would be to say that content moderators are biased in some way and drop Section 230 protection based on that pretext, but that'd require a court to find that the moderation is being done inconsistently, and intentionally so. I have a feeling there'll be difficulty proving that.
I could be wrong.
Maybe it’s time for another step.
Where, so long as it's clearly defined in the ToS and moderation policy, you can remove it. E.g. FB has a clearer public policy that says no false facts that can cause harm.
I'm 100% skeptical of this administration and the (run-on rant ahead) insanely religious zealot by the name of Barr, who is terrifying in trying to force his crazy beliefs on all of us whilst creating a monolithic, fascist-esque unitary executive whose whim (or that of whoever has his ear last...) controls virtually all aspects of our lives...
but this seems reasonable, and like it would answer a lot of the problems we see complained about on HN all the time?
right now one of the top posts is about YouTube take down with no info/recourse.
The only thing I don't like about Barr is that it's hard to see him curtailing the surveillance state, but frankly that applies to almost everyone in politics, since those who speak out against the CIA/NSA tend to get into unfortunate high-speed car accidents.
Barr himself is horrifically partisan, which as the AG of your entire country is scary as all hell. Just read his speech in November 2019 to The Federalist Society (which itself is horrifying; the society's goals are simply at odds with what the USA has claimed to be).
It's as partisan as it gets.
Or is that okay, because it's in line with your own opinions?
He thinks that religion is needed side-by-side with our individual-liberty style of government.
Read the whole speech; it's truly insane. I can't believe someone so intelligent (and effective), who was given so much power by Trump, holds these views. It's truly scary.
"But today – in the face of all the increasing pathologies – instead of addressing the underlying cause, we have the State in the role of alleviator of bad consequences. We call on the State to mitigate the social costs of personal misconduct and irresponsibility.
So the reaction to growing illegitimacy is not sexual responsibility, but abortion.
The reaction to drug addiction is safe injection sites.
The solution to the breakdown of the family is for the State to set itself up as the ersatz husband for single mothers and the ersatz father to their children.
The call comes for more and more social programs to deal with the wreckage. While we think we are solving problems, we are underwriting them.
We start with an untrammeled freedom and we end up as dependents of a coercive state on which we depend."
f9 is suspicious how? This document is not intended in any way to prevent anti-trust actions, so why is a specific exclusion suspicious? There's a lot of public push right now to act on anti-trust issues for large tech companies, so there's no secret there.
5c... is more complex, and I'm not sure it means what you think it does.
c2A does seem particularly problematic to me in the context of trying to combat disinformation which is neither illegal nor obscene, nor violent.
Can you explain more, I'm not really seeing it...
Also, how is disallowing shadow-banning problematic? I've never liked it.
If instead you outright ban them, they adjust or learn about your anti-spam algorithms and just keep going.
I'd say this administration has resonated with its base by treating bans as politically-motivated.
Facebook also regularly censors viral pro-trump content and project veritas has exposed this.
I wish it weren’t so, but it is. I’ve experienced it firsthand.
It's tough to take these at face value because we trust and believe in the things that define our ideology and take any challenge to our presentation of that as an attack on the underlying ideology.
Also, this sort of stuff can lead to real-world violence: when Facebook let violent vigilantes organize on their site, it led to Kyle Rittenhouse killing two people.
I get to take down any flyers on my fence, even if I leave up the ones I like. Or I can vandalize the ones I dislike, maybe entirely changing their message. My fence.
No they're not; the government has already shown itself able and willing to do an end-run around the First Amendment by e.g. pressurising payment processors to refuse to do business with such websites.
> The owners of the shadow-banning websites should also have the right to decide what thoughts are exposed on their website.
> I get to take down any flyers on my fence, even if I leave up the ones I like. Or I can vandalize the ones I dislike, maybe entirely changing their message. My fence.
If you're hosting a private website you can privately decide what goes on it. If you're holding yourself out as a public communications provider and want the benefits of section 230, you're being granted special privileges by society and you need to hold up your end of the bargain by hosting the kind of uncomfortable discussion that society needs.
I cannot see how a change to a law that's proposed by the enforcement arm of government will actually protect people from abuses done by that same enforcement arm. If they act in bad faith now, why assume good faith will follow?
>you need to hold up your end of the bargain by hosting the kind of uncomfortable discussion that society needs.
I don't think Amazon's user reviews are where uncomfortable discussions need to happen. But I do think the government demanding what should be discussed on servers owned by private citizens is a clear violation of the First Amendment.
This seems like an argument that no government will ever reduce its own power, and so constitutional protections, due process etc. are all pointless. Even a single arm of the government is far from a monolith, and the boundaries of what government should and shouldn't do are always evolving. I'm sure this proposal isn't coming purely from the good of this administration's heart; part of it is public pressure, and part of it is the consideration that they may no longer be in power come November. But realpolitik is always a factor; good laws are still good laws.
> I don't think Amazon's user reviews are where uncomfortable discussions need to happen.
I'm sure a lot of vital IRL political discussion happens in the checkout queue at the supermarket (I'm sure that sounds like a joke, but I'm completely serious). As life moves online, we need a corresponding public sphere.
> But I do think the government demanding what should be discussed on servers owned by private citizens is a clear violation of the First Amendment.
Private citizens acting in their capacity as private citizens are still free to discuss whatever they like, or ban whatever discussion they like. If you want to have a capaciously moderated website that's fine, but such a website will not and cannot be a Section 230 public communications provider.
Type 1: Errors, ignorance, human foibles. Primarily annoying/asshole behavior.
Type 2: Malice and malformed content. Spam, propaganda, trolling.
I can argue against shadow bans for the first type. For the second type, any evidence of your operational method is a data point to break the moderation barrier and infect/manipulate users.
If a spammer knows its been banned, it switches over to another account. If a troll knows where your ban lines are, they come back and stay exactly on the edge to trigger someone or make them fall over the line.
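The mechanic being described can be sketched minimally: a shadow-banned user's posts remain visible to the author, so they get no ban signal, while everyone else's view silently filters them out. (All names and data here are hypothetical, purely for illustration.)

```python
# Minimal shadow-ban sketch: a shadow-banned author's posts are stored
# normally and shown back to the author, but filtered out of all other
# viewers' feeds, so the author receives no signal that they are banned.
shadow_banned = {"spammer42"}  # hypothetical banned account

posts = [
    {"author": "alice", "text": "hello"},
    {"author": "spammer42", "text": "buy pills"},
]

def visible_posts(viewer: str) -> list:
    """Return the posts this viewer is allowed to see."""
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]
```

From `alice`'s perspective the spam never existed; from `spammer42`'s perspective nothing changed, which is exactly the property the comment above argues for against adaptive spammers.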
Holy shit I did not expect this government to get something so nuanced and difficult so perfectly right.
Also clarification on unlawful and promotes terrorism language is a nice touch.
Thanks for the write up btw!
Can we please just do this one on its own either way? This has been a real problem online with companies like Cloudflare offering hosting to websites engaging in these areas. These three are explicitly illegal and yet sites that harbor this content, especially cyber-stalking sites like Kiwi Farms (47 U.S.C. § 223), are still somehow online.
There are two sides to this though. If you make a carve-out for ignorance you incentivize ignorance.
I think the argument is that if you’re not able to moderate your user-generated content at the most basic levels like running image hashes against the CP database then you shouldn’t be hosting it.
Then surely some minimum level of CP detection should be part of this section, right? If the requirements here are not defined well enough, then any company, from the smallest startup to a behemoth like FB, could be liable for some CP shared through the platform in a novel way that would have been impossible to detect.
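The hash-matching approach under discussion can be sketched roughly as follows. Real deployments match perceptual hashes (e.g. PhotoDNA) against curated databases such as NCMEC's, which also catch re-encoded copies; this simplified sketch uses exact SHA-256 digests and a hypothetical in-memory hash set purely to illustrate the lookup step.

```python
import hashlib

# Hypothetical set of known-bad content digests. Real systems use a vetted
# external database and perceptual (similarity-tolerant) hashes, not SHA-256.
KNOWN_BAD_HASHES = set()

def is_flagged(image_bytes: bytes) -> bool:
    """Return True if the uploaded bytes match a known-bad digest."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

The exact-digest version illustrates why the scheme is cheap to run at upload time, but also why it only detects content already present in the database, which is the "novel way that would have been impossible to detect" gap raised above.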
Force providers to have a reporting system that feeds back a unique case code that can be quoted as evidence of knowledge.
Then they have x days to investigate and respond.
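A minimal sketch of such a reporting system might look like the following; the function name, the 14-day window standing in for "x days," and the in-memory store are all assumptions for illustration.

```python
import uuid
from datetime import datetime, timedelta, timezone

RESPONSE_WINDOW_DAYS = 14  # assumed value for the "x days" deadline

# Hypothetical in-memory store; a real provider would persist these records.
reports: dict = {}

def file_report(url: str, reason: str) -> str:
    """Record a report and return a unique case code the reporter can cite
    later as evidence that the provider had actual notice."""
    case_code = uuid.uuid4().hex[:12]
    now = datetime.now(timezone.utc)
    reports[case_code] = {
        "url": url,
        "reason": reason,
        "filed": now,
        "respond_by": now + timedelta(days=RESPONSE_WINDOW_DAYS),
    }
    return case_code
```

The point of returning the case code to the reporter is that it creates a timestamped record on both sides, which maps cleanly onto the "actual notice" standard discussed elsewhere in the thread.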
I think there may be a light, or a deeper darkness, that comes out of this though. We may see a lot of investment in automation for catching this kind of content.
I'm not involved with HN, but it seems likely that many smaller venues, especially ones that aren't big money makers, including most mailing lists and small sites like HN, would be advised to discontinue operating if exposed to this kind of extraordinary liability over content of which they had no knowledge.
"Actual knowledge" should be your preferred approach to your concern... but nothing will probably solve your problem because extremely well funded platforms are substantially immune to the law in any case, just as they're immune to common decency.
No, they’re not social network type services like Facebook or Twitter, but... section 230 doesn’t discriminate between types of online services!
It's not all chatrooms and social media. Restricting the internet to be run by people able to manage their own websites would hurt. Ebay would have to manually review every account and listing. Good luck finding a user review website like Rotten Tomatoes. No more Straw Polls. No more GitHub.
It goes deeper than that, I think. Do VM hosting companies also rely on Section 230 immunity? Do ISPs? (They’re not Title II common carriers anymore!) Would providers of this nature be required to monitor what their users do as well?
I don’t believe either of these have been tested in court, but I think there’s at least the potential here to make it much, much harder to manage a website.
And automated systems have their own huge problems.
Fixed that for you.
You mean...Samaritan? The Samaritans were the Nazis of their day and the legend is about a Samaritan that rose above his race and did what was good - thus, the one Good Samaritan.
You wouldn’t say Bad Nazi. You just say Nazi and Good Nazi.
Someone already corrected you on facts, but I just want to say that the premise of "rising above his race" is nonsense to me. Why would you think this way?
Carve-Out for Actors Who Purposefully Blind Themselves and Law Enforcement to Illicit Material
> ...it makes little sense to apply “Good Samaritan” immunity to a provider that intentionally designs or operates its services in a way that impairs its ability to identify criminal activity occurring on (or through) its services, or to produce relevant information to government authorities lawfully seeking to enforce criminal laws. A Good Samaritan is not someone who buries his or her head in the sand, or, worse, blinds others who want to help.

> One important way to confront the grave and worsening problem of illicit and unlawful material on the internet is to ensure that providers do not design or operate their systems in any manner that results in an inability to identify or access most (if not all) unlawful content. Such designs and operation put our society at risk by: (1) severely eroding a company’s ability to detect and respond to illegal content and activity; (2) preventing or seriously inhibiting the timely identification of offenders, as well as the identification and rescue of victims; (3) impeding law enforcement’s ability to investigate and prosecute serious crimes; and (4) depriving victims of the evidence necessary to bring private civil cases directly against perpetrators.

> We propose making clear that, in order to enjoy the broad immunity of Section 230, an internet platform must respect public safety by ensuring its ability to identify unlawful content or activity occurring on its services. Further, the provider must maintain the ability to assist government authorities to obtain content (i.e., evidence) in a comprehensible, readable, and usable format pursuant to court authorization (or any other lawful basis).
Is this the end of online privacy as we know it?
From here: https://www.justice.gov/file/1286331/download
I want operators to intentionally design and operate their services in way that impairs their ability to identify any activity, because any exceptions are exploits. Just saying "criminal activity" doesn't make it not spying on Americans.
The essential bargain struck for 230 was that providers get immunity in exchange for policing their system and helping law enforcement. Apple et al removing their ability to police their systems breaks that bargain.
A lot of people don't really seem to understand that about S. 230- services that exist basically to pass messages have never needed the protection and that won't change. The best example outside of messaging services is probably content delivery networks, and I'll pick on CloudFlare because they're a great example.
As long as CloudFlare is fine with just taking money and serving content, no matter whose it is, they have the legal immunity as intended. As soon as they decide (on a whim) to deny service based on content, as they did a while back, that legal immunity vanishes.
It would be catastrophic to their business to make that decision (especially considering everything else CloudFlare hosts), and that's the point.
It's also very interesting considering that the changes also appear to affect Patreon (and similar) and, perhaps a bigger deal, PayPal, since it's an interactive computer service used to fund the creation of information (among other things). In fact, I think that's an even bigger deal, because it's not necessarily traditional platforms banning certain people that causes them to be denied a platform these days, it's the inability of them to get funding in a way that isn't "send cash to this PO Box".
Except that they already do moderate based on content (malware, CP, etc) and don't have this immunity. The only reason they haven't been sued is because no lawyer or legal team is going to take on 8chan or random malware websites, and state-level DOJs don't have bad actors knocking on their door with the public's support behind them.
My understanding is that the hurdle to get something booted from CF is extremely high, which is why it's very well liked in both copy right infringement (torrents) and for general scammers trying to hide their server locations.
Just keep in mind that, when you say it should be, what you're saying is that Apple should make iCloud Backup work in such a way that, if the user loses the keys, Apple is completely unable to help them recover their data.
And, I don't agree. I can support it as an opt-in feature, but wouldn't use it; I expect sensitive applications, such as Signal, to encrypt their backups as indeed they do, and would prefer as a last resort to retain my precious exocortex, even if I lose my entropy.
And if I wouldn't use it, most people wouldn't, and Apple is right not to do it that way.
The reality is that there are tons of obvious alternative solutions to user secure key escrow other than "lose access to your data".
Serves me right for not checking the user name before replying...
Not really. Sure, the telcos can't censor individual phone calls, but if they don't like what you are doing they will disconnect you. For example, my telco explicitly prohibits me from sending spam over SMS. If I break that, they disconnect me. It costs me money to reconnect, which is a huge disincentive to spammers, but more than that, they insist on knowing who I am, that I have a "true name" in other words. So not only can they disconnect me, they can ensure I never reconnect.
Two differences with an internet bulletin board are that the users are anonymous and posting is free. Thus the owners can never effectively ban people who deliberately set out to harm their business; they can only deal with the posts as they occur. Or to put it another way: until you read his post, you can never know if a new user is a spammer, and it costs him nothing to post his spam.
The difference between a telco and an internet web site is really competition, and it's crucial. It costs billions (trillions?) to set up a telco. Worse, there is typically no local competition for land lines (and really just a handful of mobile providers too). So if your local land-line provider bans you, it's as if you've lost your town water supply or electricity connection. In that environment a telco shutting you down is indeed total censorship.
That's not true for the internet. There are literally thousands if not millions of outlets. If one does not like your post, another one surely will. Since there is no monopoly you are always able to publish somewhere, so no one person or company can shut you up. In fact, it's not difficult to build your own publishing forum and it costs peanuts, so in fact _no one_ except the government can prevent you from having your say.
230 struck a new bargain for this new environment. What arose under its protection was curated forums targeting particular consumers. Stuff the consumers don't like (such as spam, or bullshit, or leftist, or abusive content) is removed. Because the cost of creating a forum is so cheap, the competition for eyeballs is fierce, and what ends up dictating the success of these forums is not whether they agree with posts or their political views; it is how well their curation matches their target market. This equation has been spelt out here on HN time and time again, with comments like "Twitter must have judged the cost of letting Trump post bullshit higher than the readership it gained."
So, 230 solved the "free speech" problem using capitalist competition in its purest form, where "pure" means near-zero friction and a market with near-perfect information flow between participants. It seems to work pretty well to me.
It's pretty clear these amendments, g.5(c) in particular, will destroy that balance.
Further, any data about users could potentially uncover illicit activity, meaning a provider could essentially be required to track everything possible in order to avoid ignoring potential illicit content.
Then again, it could be that this entire proposal is intended just for so-called "platforms," not messengers.
To me, that says if a company writes something that prevents or blocks illegal content from being accessed by law enforcement, any immunity or protection is removed.
That is, E2E encryption, because it is impossible for someone to eavesdrop usefully by design, is intended to be made illegal.
"...if a company writes something that blocks... content from being accessed by law enforcement [like e2e encryption], protection is removed."
That's exactly how I read this. This is a head-on attack on all types of encrypted applications that would block government from (legally) accessing the user's data whenever they want.
This would effectively remove protections from Signal, iOS, WhatsApp, Keybase, or any other platform offering e2e encryption. It doesn't rule encryption illegal per se, but now the platforms may be held liable for the crimes that happened through their services, which would force them to either take their chances, or shut down, or implement some sort of backdoor.
Does that mean that GEO-restricting content will be made illegal?
What the parent is saying is that you can’t use “the system is designed so that nobody can access it” as an excuse for why law enforcement can’t access it.
If a website has a feature like voice channels or video calls that are peer-to-peer, then for "most (if not all)" of it the website has an inability to identify unlawful content. e.g. Zoom, Slack, Google Voice, and Discord would have to monitor your calls so they have the ability to identify unlawful content.
That would imply there was online privacy at some point, or at least that it was a thing that all users could reasonably understand and achieve.
If users understood that Facebook's business model might eventually require what (in hind-sight) appeared to be multiple privacy violations, but continued to use the service anyway because they couldn't help themselves, they really never had online privacy when using the service.
For those of you down-voting me, I'm pointing out this is a fallacious argument talking about online privacy. The companies who built these huge platforms didn't bake in online privacy when they built and evolved their systems. Talking about it like we've "lost it" is pointless, but I do think it's worth exploring how we can make it better!
How is that going to be determined?
The closest thing I see is subsection (d)(2), which says that the platform can be prosecuted (A) for a "specific instance of material or activity" (B) if it had "actual notice of that material's or activity's presence on the service," unless (C) they remove/block "the specific instance of material," report it to law enforcement, and "preserve evidence related to the material or activity for at least 1 year."
I believe the major commercial E2E platforms generally have the ability to notice specific hashes of known-bad material (think, e.g., child sexual abuse material) and block it / alert the platform through a client-side filter, which I think would make it pretty easy to comply with these requirements.
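The hash-matching idea above can be sketched in a few lines. This is purely illustrative: the digest set and function names are hypothetical, and real deployments use perceptual hashes (e.g. PhotoDNA) so that re-encoded copies still match, rather than the exact-match SHA-256 shown here.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-bad files.
# (Real systems use perceptual hashes so re-encoded copies still
# match; an exact cryptographic hash is shown only for simplicity.)
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_block(attachment: bytes) -> bool:
    """Client-side check, run before the file is encrypted and sent."""
    digest = hashlib.sha256(attachment).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

Because the check runs in the client, the platform can refuse to transmit (and can be notified about) matching content without the server ever seeing plaintext, which is why this kind of filtering is compatible with E2E encryption.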
Alternatively, it would be enough, I think, to remove and ban the accounts involved.
The only difficult part is that you need to "preserve evidence," but my understanding is that this phrasing doesn't generally compel you to create evidence where none existed. Privacy-focused platforms have for years avoided keeping logs that they do not want turned over to the government, and it's generally much more onerous for the government to ask you to start keeping logs than to get mad at you for deleting/purging logs you already collected.
So I don't think this actually imposes any requirements on design, or gets in the way of E2E or non-logging platforms. If you are informed of specific illegal content, you need to take action. But if you operate the service in a way that you don't have "actual notice" or "evidence" of anything people send with it, I think that's still fine.
The other carve-outs don't seem to be relevant. (d)(4) might be if you squint hard enough: it says the platform has to make itself able to receive notification of illegal content, and that a platform doesn't get immunity "if it designs or operates its service to avoid receiving actual notice of Federal criminal material on its service or the ability to comply with the requirements under Subsection (d)(2)(C)." I suppose you could argue that not keeping logs means that you've designed your service in a way where you can't "preserve evidence," which would run afoul of this. But I don't think that's the right interpretation: if you're not creating unnecessary logs in the first place, and you keep the logs you do create for a year, you've preserved all the evidence that exists.
Am I being too optimistic here? (I do agree that the plaintext summary you quoted is very concerning.)
Subparagraph (c)(1)(B) says that the only safe harbor for removing content is (c)(2). (c)(2)(A) restricts the criteria that can be used to remove content to the following: "obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful".
The current Hacker News Guidelines contain stuff like "Be kind. Don't be snarky". These are broader than the text in (c)(2)(A). Therefore the safe harbor may not apply.
Worse, section (c)(1)(C) implies that removing ANY content by a user could make the forum liable for ALL OTHER content posted by that user, unless there is "good faith", but "good faith" is defined in section (g)(5)(A) to require all moderation criteria to be defined with "particularity". "Be kind. Don't be snarky" could be construed to lack "particularity".
The only alternative provided by this law would be to only remove content according to extremely legalistic moderation criteria. In my personal experience, all high-quality open forums require moderation with a degree of subjectivity with open-ended criteria similar to our Hacker News Guidelines. Given the legal risks of having to go to trial to argue about whether you are a "publisher" of other people's forum comments, it would be foolish for anyone to continue to employ open-ended moderation under this law.
Imagine having your home raided in the middle of the night because someone thought it would be funny to upload illegal content to your startup's servers. Now imagine being bankrupted and sent to prison afterwards.
I joke, of course what would actually happen is your ISP would turn off your connection.
Not illegal, but extraordinarily legally risky. Perhaps YC has pockets deep enough, connections strong enough, and derives enough benefit to take that risk.
Something that is even more of a volunteer run labour of love? less likely.
The new ridiculously named “Bad Samaritan” section is a disaster that wipes out the point of the “Good Samaritan” section and is basically a combination of the mandatory CSA reporting law and SESTA/FOSTA for all federal and state laws. It’s so bad it almost seems like a poison pill.
The section: "(1) 'BAD SAMARITAN' CARVE-OUT. Subsection (c)(1) shall not apply in any criminal prosecution under State law or any State or Federal civil action brought against an interactive computer service provider if, at the time of the facts giving rise to the prosecution or action, the service provider acted purposefully with the conscious object to promote, solicit, or facilitate material or activity by another information content provider that the service provider knew or had reason to believe would violate Federal criminal law, if knowingly disseminated or engaged in."
It already is one for all intents and purposes, just not one that is used ubiquitously (except in the sense of getting us to buy stuff). All that is changing is that the panopticon is becoming slightly more explicit than implicit, making it that much easier to (eventually) flip a policy switch that designates groups wholesale as "enemies of the people" requiring active scrutiny and interference.
I'm reminded of this cartoon:
Twenty years later and it is still fresh as a daisy.
Bonus clip: https://www.youtube.com/watch?v=UQ6LGrr8iEg
The downside is, context matters. If you report messages out-of-order or with important context deleted, you can trick investigators into thinking something was said or implied that actually wasn't. It needs to be carefully designed.
(I blog about cryptography, but you should ask a cryptographer if you want to design something like this.)
(This is all from memory...)
Going by the original parable, this should probably just be "Samaritan carve-out," or maybe better yet just "Bad actor carve-out".
To me this is like learning that the word for getting scammed is just a dig at the Romani.
That seems to be an untenable position. Legislation is drafted with input from lots of different groups, including the agencies that might be enforcing the legislation. Proposing legislative changes isn't infringing on the powers of the Legislature.
Sure, other people vote on it. But it stinks.
> Sure, other people vote on it. But it stinks.
Wait, do you think that police departments and police unions don't participate in drafting laws?
You could make the same argument about most American systems of government, but pointing and saying corruption could exist is not the same as showing how it does.
This just seems like a nonsensical rabbit hole to explore to me.
To be clear, I'm not claiming I hold that position. In fact, I fully agree with your comment in reply to the top level comment we're under.
Also, modifying the language of Civil Liability to include good faith efforts for language that could be deemed "unlawful" is .... sneaky, and again, ripe for abuse.
You know how we have words like compile, build, "binary", or executable? It's the same thing. Expanding the interpretation of the law is expanding the interpretation of a highly technical definition and takes much more skill than saying "I interpret the words this way".
Huh? If a majority Supreme Court decides a phrase "really" means X in a certain context, it means X for the rest of the courts. The "technical jargon of the field" notwithstanding. They're often the source of that jargon.
The Slaughter-House Cases famously by a 5-4 vote reduced the Privileges or Immunities Clause of the 14th Amendment to a dead letter only 5 years after its enactment. A handful of years later it specifically held that despite the 14th Amendment, the First and Second Amendments didn't apply to the states. But then, despite no actual relevant change in the Constitution itself, the Bill of Rights began to be applied to the states by the Supreme Court in the 1900s, through the somewhat roundabout method of the Due Process Clause instead.
Or choose some other example, if you prefer. The "reasonable expectation of privacy" standard that has formed the basis of Fourth Amendment law for decades rests on the court's novel interpretation of the stubbornly unchanged words of the Fourth Amendment in the 1960s.
The Supreme Court wields huge power to interpret the law untethered to any pre-existing rule, if it so chooses.
Is there a case you’re referring to? Because in reality, it’s not uncommon for two “shall not be infringed” sections of the law or Constitution to come into conflict.
I fully understand the social context of the time (shortly after the events of the Valentine's Day massacre), but find the entire logic behind it flawed, and open to challenge on the grounds that it's essentially a poll tax (an unreasonable barrier to entry on the exercise of a constitutionally guaranteed right) predicated on the federal power of taxation of interstate commerce, which is its own bag of shakiness.
I was reluctant to even post it because it almost always devolves into a whining match that no one is infringing anything. That is, until you add the "closing" of the machine gun registry in '86 into the picture, where the registration requirement creates a de facto ban on civilian ownership/production of automatic firearms for lawful purposes: Congress has mandated that no money be spent updating or maintaining the registry, leaving it open yet non-functional, and constraining the supply of legally transferable automatic weapons to those produced and registered prior to 1986. That notwithstanding, there's a lot that has been hung on the coat rack of that entire vein of politicking that just smells to high heaven to me.
But we aren't talking about that; we're talking about Miller, the one case in which the Supreme Court leaned so heavily on the qualification that an arm must be kept and used consistent with the prefatory clause of the Second Amendment, thus cementing the next fifty-odd years of slow, methodical encroachment on firearm owners' rights to keep and bear arms, until Heller reversed that stance and explicitly acknowledged that the prefatory clause does not modify the operative clause of the Second Amendment.
Note I'm not opposed to some level of tracking/registration of certain firearms in general; just not combined with the wild gesticulations that have been employed to create de facto bans and excessively high barriers to entry to possessing, fabricating, or doing business in firearms. To me, the 'keep' part of the Second Amendment covers the right to fabricate replacement parts as needed, even receivers, but in the eyes of the law, the act of fabricating or producing is separate from the act of keeping (possessing). Hence, to meet my standard of keeping, you not only have to pay a $200 tax, you have to pay an appropriate recurring SOT (around $2000 last I checked), which also requires you to essentially do business as an FFL of some flavor, and to structure your life around what should be as frictionless and routine an interaction as humanly possible. Otherwise, the federal government will unilaterally decide you don't really need that right to "keep" (to my standard, remember) those arms, because you're not engaging in enough interstate commerce (or intrastate commerce, thanks to some effing grain-taxing case in which SCOTUS ruled that the Interstate Commerce Clause grants federal regulatory authority over intrastate commerce if the goods have a reasonable chance of affecting the interstate market) for it to be a slam-dunk case that federal law enforcement can dunk on you for effecting or attempting tax evasion under jurisdiction granted by the Interstate Commerce Clause. That clause is really being employed as a workaround to clamp down on the number of automatics or undesirable firearms, and to disenfranchise of their right to vote, via a felony firearm charge, any poor sod who doesn't sweat the details enough.
I've spent entirely too much time thinking on this sort of thing. Especially since I only own a Mossberg, but it's the principle of the thing. I downright object to any implementation of something that requires an average person to navigate that many layers of indirection for something that should just be straightforward.
The phrase in question has a Wikipedia page: https://en.wikipedia.org/wiki/Subjective_and_objective_stand...
And a bar exam study page:
You don't get to just change the meaning of such phrases on a whim, Supreme Court or not. You'd be re-interpreting hundreds of years of case law for the sake of extending a single decision, which is easier and more subtle to do in a myriad of other ways.
It's like worrying a programmer is going to redefine the meaning of the word compiler or something.
It is not. “Judicial review” is just applying the hierarchy of laws top-down from the federal Constitution, and every court in the US federal system does it. Orders striking down federal laws as violating the Constitution often originate from District or Circuit Courts. Supreme Court involvement is not necessary for judicial review.
Techdirt's headlines reflect exactly what's in the article. Instead of click-baity, a reasonable person might call that accurate.
Mike Masnick's coverage of complex legal issues is extraordinarily good. He's one of a small number of journalists who make complex legal topics understandable without butchering or omitting relevant details.
As for the Daily Outrage, well, okay. Techdirt covers outrageous behavior. TDA is a little simplistic but it's not off base.
A relevant side note: I've been calling out biased reporting for 30 years. Not because it makes my bad team look bad but because addressing bad behavior unequally provides nurturing spaces for it to thrive. Techdirt is one of the few publications that consistently called out bad behavior by the Obama administration - sometimes it was the ONLY publication doing so.
I didn't want to see Obama vilified or lionized. I wanted corruption outed and problems fixed and I really don't give a damn who the PotUS is.
Feel free to respond here with other news publications that don't change their national coverage methods, depending on who's holding the White House.
"Carve-Out for Actors Who Purposefully Blind Themselves and Law Enforcement to Illicit Material" The recommendations suggest that sec. 230 protections not be extended to platforms that intentionally structure themselves in a way to make giving information to law enforcement difficult or impossible. This probably bodes poorly for private by design forums with aggressive log flushing policies (I'm specifically thinking of things like 4chan, which claims to permanently and irrevocably delete data aggressively).
>>One important way to confront the grave and worsening problem of illicit and unlawful material on the internet is to ensure that providers do not design or operate their systems in any manner that results in an inability to identify or access most (if not all) unlawful content. Such designs and operation put our society at risk by: (1) severely eroding a company’s ability to detect and respond to illegal content and activity; (2) preventing or seriously inhibiting the timely identification of offenders, as well as the identification and rescue of victims; (3) impeding law enforcement’s ability to investigate and prosecute serious crimes; (4) and depriving victims of the evidence necessary to bring private civil cases directly against perpetrators.
To bring it back around to the topic at hand, I think this enforcement action is a mistake.
I'm not aware of how often it's used/successful, though.
The whole premise of investigating for "bias" is clearly designed to be abused - similar to HUAC asking you to prove that you aren't a communist
If you're on Twitter's good side and retweet somebody with "get em!", you're good. If you do the same while not aligned with Twitter politically, you're asking your followers to harass individuals and will be punished.
If these proposed changes are enacted, I await the catch-22 where an "online platform" is sued in relation to the same content; first where they "censored" something and then had to put it back online, and second as "knowingly facilitating criminal activity" because it's online.
The DoJ is on step 1: https://www.usa.gov/how-laws-are-made They simply have a louder podium to announce their idea.
The law of the jungle?
Also, the DOJ really should back off here. They're an Article II department; their job is enforcement, not legislation. If the law should change, that is emphatically Congress's responsibility. They can recommend all they want, but it should be valueless.
> They can recommend all they want
Isn't that what they are doing, recommending? Your statements seem contradictory.
Not like Barr's words: "For too long Section 230 has provided a shield for online platforms to operate with impunity. Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America." Because those are Trump/Barr opinions or value judgments about how good or bad the recent state of affairs has been and about what is supposed to be important to America.
Law enforcement isn't supposed to set the policy objectives. They're just supposed to implement them.
No, I mean that the DOJ's opinion on what the law should be is literally without merit, and possibly is worth even less than that. The creation, implementation, and adjudication of the laws are separated into distinct branches of government by design. I do not want the group that is responsible for enforcing the laws weighing in on what they think the law should be; one does not ask the group that will wield the power what powers they ought to have if one wants it to end well for everyone else.
I would be much happier if the DOJ stuck to enforcing the laws as written and would prefer if they would kindly shut up and go away on the issue of what laws they think should be written.
Edit: To be clear, I also think that Barr is full of it. He's blown any credibility of being anything but a partisan hack. But even with a different AG and a different administration, the idea of the DOJ recommending to Congress what powers it should have strikes me as very much a bad idea, and I would say that even if I was otherwise happy with the administration suggesting it.
Same for the CDC, FDA, EPA, IRS, HHS, etc.?
I suppose the President should also be forbidden from proposing legislation and for asking any of his agencies for proposals also?
This is an untenable position, IMHO.
I’m saying that we should tell those who will wield the power to pound sand when it comes to what powers they should get. Ideally the president should not be proposing legislation, that is not their role in this democratic republic. One could argue that the centralization of legislative agenda making into the executive is one part of why presidential elections are such a high pressure situation these days; ideally that should be handled by the deliberative body and not the executive.
That being said, there is a huge, massive, unbridgeable chasm between the DOJ, who is not only capable but expected to send men with guns to either detain you or legally shoot you if you resist and every other organization you listed. The risks of abuse of power within that specific organization are massive, which is why traditionally there’s supposed to be a bit of a gap between the president and the DOJ to reduce the risk of politicization of the latter.
I understand the general conflict of interest you are concerned about but that is why there are separate branches. The executive branch can only propose and/or respond to inquiries, they can't actually introduce legislation.
(I'm just going to ignore the rabbit hole of regulatory construction here, as that is another can of worms)
And yeah, the devolution of quasi-legislative ability to regulatory bodies is a serious ball of wax that I haven’t managed to formulate a coherent solution to.
It says providers cannot purposely "turn a blind eye" to potentially illicit traffic, i.e., cannot choose not to track and record such data. Yet any data about individual users and their behavioral patterns has the potential to help reveal illicit activity.
Meaning a provider could essentially be required to track everything possible in order to avoid ignoring potential illicit content. Keystrokes? Absolutely they could reveal illicit activity. By deciding not to record them, a provider is turning a blind eye to that possibility.
It seems like completely normal activity to me.
Redlining an existing statute with changes is what the DOJ published and is closer to what I was getting at when I said "participate in proposing legislation".
I’d be interested if they always came with a press release and public quotes / pressure - but maybe that’s in the ACU report. Haven’t gotten all the way through it yet.
Agencies don’t get funded to write the law that gives them the mandate to write the regulation. That’s a classic bureaucratic self-licking ice cream cone.
How common is that already? It seems kinda similar to the president preparing and proposing a budget to Congress.
It’s perfectly fine to push your point of view online, as it’s protected by the First Amendment. But you should bear the consequences in the cases not protected by the First.
1A does not give you protection from religious discrimination among persons and corporations.
It is the Civil Rights Act that does.
Clearly at some point someone said "freedom of religion should mean protection from persons and corporations", why can't we say the same about speech?
A salesperson should be able to be fired for shit-talking their own product. A customer service representative should be able to be fired for treating a customer inappropriately. The editor of a magazine should be allowed to edit contributors' articles. Putting speech on the same level as a protected class is ridiculous.
But can definitely fire him for anything he does, right? Like having a pride flag on his backpack? Or committing the speech act of saying "I'm gay"?
Someone mentioning they’re gay after being asked if they have a wife would be a very different situation from someone who, for instance, is engaging people in inappropriate and unwanted discussions of sexuality.
The law is quite simple: was the person fired because of their membership in a protected class?
I mean, what other answer is there than what’s implied by that very question: it’s because “legislators haven’t made a law”.
Now you can believe all you want that there’s been a centuries-old conspiracy by legislators to not ever draw up and approve this kind of law. But maybe you should consider that the other things mentioned in the 1st amendment, such as freedom of press, lobbying, and assembly, also aren’t “codified at the same level” as religious protections. And maybe you’ll realize that there are obvious differences in how religion is perceived to be different than the 4 other things protected by the 1A.
Imagine if someone asked you "Why is this apple red?", and you responded by saying "Well obviously, it is red because it is red!"
Would this response be considered by anyone to be anything other than a pretty extreme bad faith response?
I think it is pretty obvious what the other person was asking. And yet, you responded... in the way that you did.
To use your analogy, it'd be as if u/deadmik3 were asking, "Why must apples be called 'apples', when oranges are called 'oranges'?" It's a question that isn't coherent enough for a single answer, as it seems to be based on flawed foundational assumptions on the part of the asker, e.g. "Who says you 'must' call apples 'apples'" and "Do you actually think English people named oranges after the color, 'orange'?"
Here's deadmik3's original question/assertion :
> Why is speech the only part of 1A that gets this treatment? You wouldn't say the same thing about religion
I would've thought u/kube-system's response was clear enough (e.g. it's the Civil Rights Act, not the 1A, that protects religion), but apparently it hasn't been. So I genuinely don't know what deadmik3's issue is. Do they think laws (and/or the process of making them) are merely a "semantic" concept? Do they think that religion and speech, being in the "First" amendment, confers to them a special overriding priority (i.e. in the way that being in Amendments 2-27 do not)? And if so, have they considered that the 1A explicitly mentions 3 other freedoms – press, assembly, and petition/lobbying – that, like speech, do not have the protection for religion?
Without knowing the presumptions behind their confusing question, it's hard to answer or otherwise debate it. I mean, the natural rebuttal would be to point out that the CRA's protections for religion are far from clear-cut and indisputable – has deadmik3 never heard of the gay wedding cake case, which after 6 years ended in a narrowly defined Supreme Court decision? – which means that similar protections for free speech would be even more contentious and logistically complicated, which is likely a key factor in why that legislation doesn't exist/has never passed.
But why get into that if someone believes lawmaking is a semantic designation, rather than an actual process that requires considering how a law (and its enforcement) will actually operate in reality?
The semantics, in this case, is when someone is trying to ask "why have lawmakers not done X yet?"
And then the response to this question is "its not the law!"
The original question was quite clearly asking for a justification or reason as for why lawmakers have not done something.
And then the semantic response, that ignored the very obvious question, was to say "this is what the law is!".
We can. We just haven't yet, and it is not clear that it would result in a world that most Americans would prefer to the one we live in. And like religion it would be subject to lots of tension and litigation about the speech of the corporation's owner vs. the speech of the corporation's customer.
If a platform wants the liability protections for being a platform, then we can force them to not act like a publisher, or we can take away those platform protections.
Why should we force websites to choose between being a publisher and a completely unmoderated platform? Why do people keep parroting that line with zero justification as if it's self-evident? You are commenting right now on a website that is heavily moderated, a website that could not possibly exist if the admins faced personal legal liability for any illegal content an anonymous commenter posts on here. If you don't think Hacker News should be legally allowed to exist why are you posting here?
Nobody outside of a fringe group of edgelords wants their favorite Internet communities to turn into 8chan. But the legal regime you are suggesting would make any other kind of website that hosts user generated content effectively illegal.
If our concerns are facebook, instagram, reddit, and maybe 20 others, let's not constrain ourselves with fundamental rules that try to group them in with an obscure independent forum site that houses a few hundred members
For example, calling for the murder of specific people based on their political views or race doesn't fall under the First. And social media platforms allow the spread of some of these messages with no consequences at the moment.
Someone randomly spouting off that people of a race or ideology should be wiped out doesn't always/exactly pass this legal test.
I don't know what the exact legal definition of imminent is, but the layman's definition involves the thing happening soon.
It may well depend on how close to 5 it is (but what timezone?).
The same sentence with the words "right now" would almost certainly meet the test though (assuming the action was actually likely to occur).
E.g. "We should go harm X" is arguably illegal, since it's an immediate call to action.
However, "It'd be great if X died" or "All Y should die" are certainly protected.
In the same vein, this is why "Punch a Nazi" is totally legal: assault is illegal, but you're not immediately inciting a lawless action. "Let's go punch that Nazi", less so.
If you write a book advocating for violence against x or y group or individuals that is permissible, but if you were in a crowded square and advocated the same thing when those targets were also in the square and it is likely that your incitement will lead to violence then it is not. That's incitement, it's imminent, and it is for a lawless act. But again, if you did it at home on your blog in some nebulous sense that isn't likely to cause some specific event then it is protected speech.
An important distinction here is that "true threats" are a separate category from what we are talking about. A true threat doesn't have a "likely" or "imminent" component and so is even broader in scope than violent speech in general. That is, true threats are not protected.
The government is the entity that is not allowed to restrict legal free speech. Private corporations are not bound by the same rule; they can restrict all they want.
Or somehow UPS is different from a social media entity. Then what legally is a social media entity?
These are interesting times. The rules will certainly change; it remains to be seen if they will ultimately change for the better.
So in your framework, transmission of user content + advertising based on that content = culpability. I wonder if social media companies would figure out a way to legally prove transmission of a message without viewing its contents as a way to avoid culpability and maintain some level of profitability.
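One way to do that, at least as a sketch: the relay logs only a digest of the already-encrypted payload, which lets it later demonstrate "we carried a blob with this fingerprint" without ever having been able to read the plaintext. This is a hypothetical illustration of a hash-commitment receipt, not any platform's actual mechanism.

```python
import hashlib

def receipt(ciphertext: bytes) -> str:
    """The relay stores only a digest of the encrypted payload."""
    return hashlib.sha256(ciphertext).hexdigest()

def matches(ciphertext: bytes, logged_digest: str) -> bool:
    """Anyone holding the ciphertext can check it against the relay's
    log, proving that this exact blob was transmitted, without the
    relay ever seeing plaintext."""
    return hashlib.sha256(ciphertext).hexdigest() == logged_digest
```

Whether such a receipt would satisfy a court, or support content-based advertising, is a separate question; the digest proves transmission of the blob, not knowledge of its contents.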
Lawyers hate small pockets.
Judging from patent trolls (technically their lawyers) suing small startups, I don't see how that's the case, unless each website was serving <100 people and making 0 revenue.
It's funny how conservatives were totally on board with deregulation and perfectly fine with corporations steamrolling every one else as long as they were aligned with conservative objectives. Then the moment a powerful corporate faction with liberal-ish sensibilities emerges, they freak out and abandon all their fake principles and run crying to the nanny state to save them from the big evil corporations.
That doesn't make much sense in this context - the "nanny state" refers to the government taking care of your physical needs, like a nanny. But even if you sweep all government regulation under the blanket of "the nanny state", that's still not what's being proposed here: what they're proposing is removing protections previously afforded by the government because they've been abusing them for so long. In essence, all that's being proposed is that everybody plays by the same set of rules.
2. The ‘if you want a company to be run differently, start your own’ argument is tiresome and weak. There are (and should be) many mechanisms to influence corporate behavior.
3. Even the pre-Trump Republican Party has long taken flack from libertarians who essentially argue that one core principle should guide their political philosophy.
3B. Personally, I have not found a strong philosophical grounding to claim that political philosophies should be reducible to one core thought from which everything neatly derives. (That would be nice, wouldn’t it?) In my experience, figuring out public policy decisions is fundamentally more complex than that due to the interplay of conflicting values and moralities.
this isn't really true of libertarianism even. the word "aggression" from the NAP does a lot of heavy lifting and is subject to a lot of different interpretations.
> Initiating or threatening any forceful action against an individual or their property
Is your complaint that the boundaries of "threatening" are too squishy?
suppose you see me walking around town with a rifle. is that threatening? maybe not if you're comfortable with open carry, but what if I do it on the sidewalk in front of your house?
is it a violation of NAP to not wear a mask during a pandemic? what if I've already tested positive for covid and am refusing to quarantine? or what if I know that I have a detectable viral load for HIV and have sex without informing my partner?
another interesting example: if you accept the claim that racist speech is an implicit threat of violence, you can use NAP to justify deplatforming.
you can make NAP imply almost any position you want, depending on how you interpret it. only a very specific and narrow interpretation implies the typical positions held by (US) libertarians.
Basically if you don't flat out say "I'm going to do X", that's not really counted.
But I can see why it's confusing for an outsider.
The ‘non-aggression principle’, in my experience of libertarianism at least, is not as central or as common across libertarian writings as this discussion suggests.
> Libertarianism (from French: libertaire, "libertarian"; from Latin: libertas, "freedom") is a political philosophy and movement that upholds liberty as a core principle. Libertarians seek to maximize autonomy and political freedom, emphasizing free association, freedom of choice, individualism and voluntary association. Libertarianism shares a skepticism of authority and state power, but libertarians diverge on the scope of their opposition to existing economic and political systems. Various schools of libertarian thought offer a range of views regarding the legitimate functions of state and private power, often calling for the restriction or dissolution of coercive social institutions. Different categorizations have been used to distinguish various forms of libertarianism. This is done to distinguish libertarian views on the nature of property and capital, usually along left–right or socialist–capitalist lines.
to be clear, I certainly don't intend to shit on libertarianism. I'm far from an expert on the philosophy, and I do feel libertarians make a lot of valuable contributions to political discussions. I wouldn't want to live in a world where a libertarian got every single thing on their wishlist, though.
My #3B point emphasizes this question: ‘Is simplicity best? Or simply the easiest?’ (to quote a song)
In my view, the respectability of private moralities is not strongly correlated with the simplicity of their core principles. (For background on what I mean by public and private moralities, see writings by Robert Kane, such as ‘Through the Moral Maze’.)