Twitter locks out Irish Bishop after he criticized euthanasia (reclaimthenet.org)
222 points by URfejk 12 days ago | hide | past | favorite | 195 comments





Why does it matter if the flagging is automatic? Does that somehow make it more acceptable?

The popularity of this view makes me wish more people read Kafka. A future tyranny might end up not being Orwell's 1984 or Huxley's Brave New World, but a Kafkaesque nightmare where people are lost in a world of AI handing out absurd punishments. [1] Kafka's novel The Castle [2] is essentially about this, although I think his aphorisms and short stories are much better than his novels.

Quoting from Wikipedia:

> The villagers hold the officials and the castle in high regard, even though they do not appear to know what the officials do. The actions of the officials are never explained. The villagers provide assumptions and justification for the officials' actions through lengthy monologues. Everyone appears to have an explanation for the officials' actions, but they often contradict themselves and there is no attempt to hide the ambiguity. Instead, villagers praise it as another action or feature of an official.

Replace officials and castle with AI and it is almost the exact same scenario.

1. This is already happening: https://www.wired.com/2017/04/courts-using-ai-sentence-crimi...

2. https://en.m.wikipedia.org/wiki/The_Castle_(novel)


> Why does it matter if the flagging is automatic? Does that somehow make it more acceptable?

Neil Postman discusses this at length in Technopoly: The Surrender of Culture to Technology (1992):

> Naturally, bureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control. Because of its seeming intelligence and impartiality, a computer has an almost magical tendency to direct attention away from the people in charge of bureaucratic functions and toward itself, as if the computer were the true source of authority. A bureaucrat armed with a computer is the unacknowledged legislator of our age, and a terrible burden to bear.


AI is a new wrinkle in an old trick to distance elected officials from accountability. After the electrical blackouts last week in Texas, the weather-worthiness of power generation facilities was framed as a technical decision that engineers made, instead of a political decision to weigh costs and benefits of regulation. When the decision is framed this way, the public's attention is directed away from the elected officials who are supposed to be accountable, and for which we have created straightforward mechanisms of accountability. Public anger dissipates in a fog of uncertainty of how these unknown, anonymous engineers could be held accountable and what kind of accountability they should actually have. AI plays exactly the same role.

Funnily enough, Das Schloss is often interpreted to be a criticism of the church and organized religion in general. The castle and its workings are seen to be like the clerical teachings, arbitrary, incomprehensible and illogical, but nonetheless highly respected by the plebs.

Eh, I think Kafka was infinitely more subtle than that. Framing the story as a mundane criticism of the Church seems to be missing much of the depth of his writing.

On that note, Kafka was also quite interested in Kabbalah, Jewish mysticism, which is almost inversely related to sociopolitical structures.

https://en.wikipedia.org/wiki/Franz_Kafka_and_Judaism

Here's a great aphorism of his that's related to religion:

“Leopards break into the temple and drink all the sacrificial vessels dry; it keeps happening; in the end, it can be calculated in advance and is incorporated into the ritual.”


How does the specific institution being ridiculed matter at all?

Whether it's the Catholic church, Youtube or the DMV the point is it's a black box of procedural spaghetti that too often produces insane results and is seemingly unaccountable.


It doesn't really matter, but it is a funny coincidence that a cleric, as a representative of one Kafkaesque institution, is being censored by Twitter, another Kafkaesque institution.

There was a time when the clergy was Catholic, but might various "scientific experts" be filling that role today?

Great point. This can be applied to any type of "revealed wisdom" which is not obvious to the layperson. We are expected to accept their conclusions not question them.

Reminds me of the greatest scene to ever appear on TV

https://youtu.be/Zgk8UdV7GQ0?t=60


The role of the church 400 years ago is now similar to that played by the technocracy today.

Why does it matter if the flagging is automatic? Does that somehow make it more acceptable?

The problem is very simple. Unless social media companies either stop moderating content, which will kill their profits because advertisers will run away, or pay people to moderate everything, which will kill their profits because it'd be wildly expensive, they'll continue to do the half-assed version of moderating they do now using AI and actually make a profit, but also make stupid mistakes like this.

Computers and people don't need to operate perfectly. Expecting that is silly. So long as companies deal with errors reasonably it's all good.


> So long as companies deal with errors reasonably it's all good.

We all eagerly await this utopia.

Or maybe expecting that is silly, given the many years of empirical evidence?


There are plenty of counterexamples for one who might wish to dismiss the idea that technology can reasonably manage things, but all things considered, given the number of _potential_ instances of mishap that do not regularly occur and the relatively minor consequences of the mishaps that do occur, I’d conclude that on the whole arguing the original poster’s position is not inherently ‘_silly_’.

> There are plenty of counterexamples for one who might wish to dismiss the idea that technology can reasonably manage things

Are there? How many historical precedents are there for "curation" at the scale of today's tech giants: Facebook, Twitter, Google, Apple App Store, Amazon.com? IMO none of them do it well, or even reasonably.

> given the number of _potential_ instances of mishap that do not regularly occur

We only hear about the "notable" cases. Every day, people are accidentally or intentionally banned by Twitter or one of the other tech giants, and nobody ever hears about it, because these people are not famous and have nobody else to speak for them and raise a public fuss.

> and the relatively minor consequences of the mishaps that do occur

Regardless of one's opinion on the matter, you can hardly call the banning of the POTUS, for example, from the world's largest and most important social networks as a "relatively minor consequence". This is very consequential. You may think the consequences are good, or you may think they're bad, but it's consequential.

If these "mishaps" are not consequential, why are we all here talking about them?


>> stop moderating content, which will kill their profits because advertisers will run away,

One common trend for decades has been that "conservative" advertisers quickly acquiesce once they see that their customers do not share such values. Once upon a time major brands wouldn't go anywhere near sexualized content. Now Coke/Pepsi/Ford will pay millions to get their products into a Miley Cyrus video. Once upon a time major brands didn't want to be associated with "shock jock" radio hosts. Then Howard Stern made that acceptable and they all jumped on board. Advertisers today don't want to be associated with unmoderated websites. That too will change as such websites become a norm.


Sure, 1922 Coke advertising used women in a swimsuit; that’s about as sexualized as a Miley Cyrus video relative to the time period. If anything, they’re very consistent in targeting various segments over time.

That said, US social norms get more and less restrictive over time. Dueling for example used to be perfectly legal as was harsh physical punishment for children. What we think of as progress is arguably just increasing similarity with our current views. Continue the same mutability and 2200 might seem much worse to us than today is, but they would similarly think of right now as unbearable.


>> 1922 Coke advertising used women in a swimsuit

There was far more sexualized content available in the 20s. A girl in a 20s-style 'swimming costume' was the advertiser-acceptable limit at the time for mass media. A Miley Cyrus video would be the sort of content available at 1920s peep shows in bad neighborhoods, content from which advertisers then stayed far away. Today they readily endorse it.


Those peep shows were closer to strip joints/late-night cable than Miley Cyrus. Full frontal nudity, etc.

She’s a little past prime time TV, but hardly porn.


*Advertisers today don't want to be associated with unmoderated websites. That too will change as such websites become a norm.*

Or, to put that another way, stopping moderation would lead to a few quarters of negative growth, which would cause shareholders to riot, try to sack the board, etc, and that is exactly why it'll never happen.


> Computers and people don't need to operate perfectly. Expecting that is silly. So long as companies deal with errors reasonably it's all good.

That depends a lot on the lead time for dealing with the errors, and the magnitude of the negative impact the error has.

The lead time for Google dealing with their errors seems to average just below infinity. Twitter, admittedly, is quite a bit better.


> or pay people to moderate everything, which will kill their profit because it'd be wildly expensive

This gets repeated ad nauseam whenever excessive automated moderation is questioned on big platforms, but I have yet to see any actual evidence of it. I don't believe for a second that the likes of Google, Facebook, or Twitter can't afford to turn down their algorithms and significantly increase their manual moderation efforts.

Platforms like Reddit and Discord get along alright with limited automated moderation combined with a federated mod structure where most community-specific moderation is handled by unpaid volunteers. Those platforms are gigantic, but they don't make the front page of HN daily for illogical, heavy-handed moderation practices.

Besides, nobody expects all moderation to be 100% manual. A lot of content can be reliably identified and moderated by an automated system. Automated systems can even still help with things which they cannot be certain about. Instead of removing the content and taking punitive action against the user, an automated system can just bring the questionable content to the immediate attention of an organization of humans.
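That triage policy is easy to sketch. The code below is a purely hypothetical illustration (the thresholds, the `classify` stub, and every name are invented, not anything Twitter actually runs): only near-certain violations are acted on automatically, and the uncertain middle band is queued for human review instead of triggering punitive action.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    text: str

def classify(post: Post) -> float:
    """Stand-in for an ML model; returns an estimated P(violation)."""
    return 0.95 if "spam" in post.text else 0.10

def triage(post: Post, remove_threshold=0.99, review_threshold=0.80) -> str:
    """Only near-certain violations are removed automatically;
    uncertain cases go to humans rather than to the banhammer."""
    score = classify(post)
    if score >= remove_threshold:
        return "remove"        # unambiguous cases, e.g. known-bad content
    if score >= review_threshold:
        return "human_review"  # flag for a person; take no punitive action
    return "allow"

print(triage(Post(1, "buy spam pills now")))  # human_review
print(triage(Post(2, "hello world")))         # allow
```

The point of the sketch is the middle band: the automated system still does the bulk of the filtering, but its uncertainty is surfaced to people instead of being converted directly into suspensions.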

Not to mention we're talking about some of the wealthiest companies in the world here. Do we have any idea how much they currently spend on manual moderation? Professional content moderation isn't exactly a highly compensated position. I suspect they could increase their moderation workforce by a couple orders of magnitude without the slightest concern about breaking the bank.

The big social media platforms complain that significant manual moderation is impossibly expensive, but they'll say just about anything to save a dime. That's the real reason they rely so heavily on automated systems - to save as much money as possible. As long as it doesn't cut into their revenue, they'll keep doing it. And if people buy their crap about it being impossible, that suits them just fine.


>Platforms like Reddit and Discord get along alright with limited automated moderation combined with a federated mod structure

Reddit is constantly criticized for the effects of its moderation policies, both in how arbitrary they can seem, and how they allowed genuinely law-breaking content to stay up at the discretion of those "volunteer" moderators. They are practically constantly in the news.

(Discord exclusively has invite-only communities, so it's a totally apples-to-oranges comparison)

While I would say that reddit's behavior is at least somewhat preferable to the topic of discussion, claiming it's without a similar level of criticism just means you don't actually use the platform.

It's also worth pointing out that what others have voiced regarding AI, that such a strategy just serves to deflect blame from the company, is just as true when relying on community volunteers. I suspect that the main reason that they don't make the front page of HN specifically is because "human makes a bad decision" is several orders of magnitude less interesting to this site than "AI makes a bad decision", regardless of the relative social harm.


This is like the negative opposite of the happy coincidences that need to happen for a startup or new idealistic organisation to take off:

The founder(s) have a crazy, innovative idea to improve the world, and while they're perceived as mad by the masses initially, it's the first followers who give credence and legitimacy to this idea. So the first employees, initial customers, initial investors, and even just random supporters who independently believe in the value of what the originators are doing, and that helps propel acceptance within the larger community and eventually the masses.

This is the authoritarian opposite - really terrible decisions and mechanisms have been implemented and are being repeated on a daily basis by powerful companies or organisations, and while the masses find it worrying initially, there are apparently no shortage of apologists online and offline who are falling over themselves to helpfully explain to the rest of us why this crap is ok:

"It's an AI algorithm"

"They're a private company"

"You don't have to use them"

"Well, they're standing against bigotry"

etc

And then suddenly we find ourselves in an environment not unlike the Kafkaesque scenarios referenced. Be wary of "explainers" - often they're just consensus-building, but it's not clear whether they're building in the right direction or the wrong one.



I remember reading this years ago, and it's still as bonkers as ever.

Slightly related:

https://en.wikipedia.org/wiki/Brazil_(1985_film)

"One day, shortly before Christmas, a fly becomes jammed in a teleprinter, misprinting a copy of an arrest warrant it was receiving, resulting in the arrest and accidental death during interrogation of cobbler Archibald Buttle instead of renegade heating engineer and suspected terrorist Archibald Tuttle."


how prescient that it was a bug :)

"Do not fold, spindle or mutilate" was written on IBM punch cards.

One difference is that Twitter, Facebook, Google, are not governments and have no real authority over you. They only have the power you give them by choosing to stay in their ecosystem. Therefore it's a strain to refer to them as officials.

If you're worried about them _becoming_ officials, then act now. Leave these platforms and convince others to do the same.


Well, in a literal sense, no. But they certainly have the ability to:

- Exclude you from the largest (and nearly monopolizing) media channels. Good luck getting an online business off the ground when an AI filter bans you from YouTube, Facebook, Twitter, and Google search results. And good luck contacting Google support to contest it, which is notoriously absent for virtually everyone.

- Have you socially ostracized by labelling you as X bad thing, all without you having any recourse or ability to contest the designation. The fact that stuff like this is legal blows my mind: https://www.cbsnews.com/news/yelp-racist-alert-added-busines...

- Collude to affect government policy and prevent competitors from gaining any footholds. Typical BigCo stuff.


Sticking to the consumer-side of the discussion, the BigCos aren't going to break the cycle. In fact, it's a capitalist crime to do so. Their flywheel requires a steady cattle population to surveil in the fields. When we choose as individuals to remain in their ecosystem, we choose the hidden fees of surveillance capitalism over more straightforward payment models. This is what gives BigCos the muscle to wield against smaller capitalists.

We in our consumer role, by not participating in these business models, are the only truly free actors in the system at this time. The only other way out is for government to get bigger and step in, as it's doing in Australia now.


The government could do plenty of things (using laws written a century ago in the Teddy Roosevelt era) to encourage competition and break up de facto monopolies. They've simply been asleep at the wheel.

> If you're worried about them _becoming_ officials, then act now. Leave these platforms and convince others to do the same.

The assumption that we do still have agency, and can exercise power as a collective by "voting with our feet/wallet", is wrong. We (some isolated heroes and activists who care) might be able to throw some spanner into the works or otherwise help things unravel at a slower speed. But things will still unravel in the same way as Jacques Ellul documented and predicted in "La Technique"[0].

To get rid of them is not possible because the only way out we can imagine is regulation which will further cement their position and status quo. There is no future in which these companies and power structures will not be around. I think it is more likely we see the end of the world than to envision a world where these platforms don't exist (maybe not among everyone my age but certainly among everyone 20 years younger than myself who has never experienced society without the Internet/Web)

[0] The Technological Society https://archive.org/details/JacquesEllulTheTechnologicalSoci...


Yes - but until such a time as we have a healthy ecosystem and interoperability, they are a de facto monopoly, and I would expect the "anti-monopoly" people to do something about it, if only to promote an ecosystem. Instead we live in this weird limbo where we can't criticize these platforms because "it's a private company" whilst still using them as if they were a government/society-level ubiquitous platform.

> One difference is that ... Google ... are not governments and have no real authority over you.

Legally-required quarantine-checking apps or contact-tracing apps in some countries have only been made available through the Apple and Google Play stores. The Android apps often require Google Play Services, which require a Google account.

The same will likely be true of the vaccination-certificate apps that are being planned in some countries (countries intend to allow you to generate limited-time QR codes, not present a permanent paper certificate).

Consequently, staying on the good side of supposedly private platforms is increasingly necessary to be an ordinary, law-abiding citizen.


> If you're worried about them _becoming_ officials, then act now.

If you mean this figuratively, we already have many situations - going back at least to the Arab Spring - of government officials directing policy on social media sites. Secretary Clinton of the Obama Administration spoke about this many times.

If you mean this literally, check out how may high level policy types from social media companies came from and have returned to high ranking spokesman+policy roles in Biden Administration.

But fundamentally, it doesn't require being an official. Even filtering and shaping search results has an impact.. and this has been measured repeatedly. Here's a study from five years ago (aka pre-Trump) which demonstrated it:

https://aeon.co/essays/how-the-internet-flips-elections-and-...


We don't live in isolation, and most of us couldn't. If we want to be part of the social fabric, we're often forced to use these services to some extent, so they have real authority over us.

> future tyranny might end up not being Orwell's 1984 or Huxley's Brave New World, but a kafkesque nightmare where people are lost in a world of AI giving out absurd punishments.

I think the present for some people is already a synthesis of all three and if we're not careful it will be for everyone.


Automatic flagging just means their algorithm is crap. Which is bad, but it's a fixable technical problem in the medium term and the post can be restored in the short term.

If this were deliberate flagging it would mean they were way outside of any reasonable interpretation of their guidelines, and users really could no longer post anything and be confident it wouldn't run afoul of those interpretations. This is much harder to fix, and the confusion it would cause is significantly higher.


Ah... I'm sorry to say, ihsw, but you've been shadowbanned.

The technical problem isn't fixable without AGI.

The vagueness and ambiguity are features, not bugs -- it permits arbitrary punishment. Personally I'd rather they skip the appearance of reasoned arguments and go straight to "Twitter does not approve of this message."

No muss, no fuss.


Alright, let's say you get appointed at Twitter to clean up this mess. You get full authority to do whatever needs to be done.

How would you solve this? Your solution will be criticized by the other HN-ers here :D.


>Alright, let's say you get appointed at Twitter to clean up this mess.

Announce a sale of intellectual and physical assets and close down. I would realize the company is a negative drain on society.


So basically some other platform will take over?

No, break up all the tech giants, and prevent monopolistic platforms from ever arising.

A platform so gigantic that it's impossible for humans to intelligently manage it should simply not exist.


You do realize the network effect right?

You mean that a bunch of your friends is on one platform, and a bunch of others on another platform, and your family on still another one.

But then you have to be on all platforms, so this means everyone else also needs to be on all platforms.

So how exactly do you see this working when reality pushes everyone to use a single platform? because you know, it's a social network.


1) My family is on Facebook. I am not. ;-) It's already the case that Facebook, Instagram, and Twitter for example have different "crowds", though there's some overlap.

2) Somehow humanity survived without these networks pre-2007ish.

3) I have no problem with the existence of many smaller special-interest social networks. Those are manageable, and we had them before the rise of the BigCos. The problem is with the existence of a few all-encompassing, world-consuming, general-purpose social networks. They have proven to be completely unmanageable and toxic.

"Abolish Gmail", for instance, doesn't mean abolish email providers. Not at all. And it doesn't mean that "the next gmail" will replace it. Healthy competition is good, unhealthy monopolies are bad.


Your point 1 contradicts your monopoly theory.

Your point 2 says you really want to join the Amish.

You mean smaller platforms like Parler or 4chan?


> Your point 1 contradicts your monopoly theory.

What is my "monopoly theory"? I clarified what I meant: "A platform so gigantic that it's impossible for humans to intelligently manage it". We don't really have a good term for a corporation that has undue, oversized market power. Pedants like to argue "so-and-so is not literally a monopoly!" but this is just wordplay and doesn't really address the serious social issues involved.

> Your point 2 says you really want to join the Amish.

Maybe!

> You mean smaller platforms like Parler or 4chan?

I mean smaller platforms. It would be ridiculous to suggest that your cherrypicked examples are representative. You can make anything look bad by cherrypicking.


Hmm. Obviously I don't have a perfect solution, but I've often thought that a "tagging" system would work better than a "moderation" system.

It would work like this: by default, nothing is moderated. Instead, every post/tweet/etc. has to be labeled by the user. There can be an auto-suggestion feature to make it a quick process.

Then, other users can choose which tags they want to view or hide. If you don't want to see content tagged with X, no problem. If you do, you can.

In my mind, this would solve a few problems:

1. Justifiably ban people that try to circumvent the tagging system. You aren't determining whether their speech is allowed or not, merely ensuring that they follow the very basic rules of the site. This is much simpler than relying on an AI to interpret a nuanced comment and then moderate it.

2. Allow everyone to say whatever they want. No one will feel silenced.

Again, I'm sure there are holes in this idea, but I think it might be a better approach.
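For what it's worth, the core of the proposal fits in a few lines. This is a minimal sketch under the assumptions above (the field names and tags are invented for illustration): authors attach tags to their posts, each reader maintains a set of hidden tags, and the feed simply drops any post whose tags intersect that set. Nothing is ever removed from the platform itself.

```python
posts = [
    {"id": 1, "text": "election hot take",      "tags": {"politics"}},
    {"id": 2, "text": "cat picture",            "tags": {"animals"}},
    {"id": 3, "text": "graphic accident photo", "tags": {"gore", "news"}},
]

def visible_feed(posts, hidden_tags):
    """Moderation-by-filtering: every user sees everything
    except the tags they have opted out of."""
    return [p for p in posts if not (p["tags"] & hidden_tags)]

feed = visible_feed(posts, hidden_tags={"politics", "gore"})
print([p["id"] for p in feed])  # [2]
```

The enforcement surface then shrinks to the one rule mentioned above: posting with deliberately wrong or missing tags is the bannable offense, which is a far simpler judgment than interpreting the content itself.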


Tumblr's tagging system is quite similar to this - you can "follow" tags or "blacklist" them and most people do tag pretty extensively - and it allows people to get away with some things that would be banned on twitter, like gore blogs. On the other hand, Twitter allows nudity, while Tumblr does not, and they both still ban for "hate speech."

Right, it seems pretty implementable from a technical perspective. The hard part is having a company put "I disagree with what you say, but will defend your right to say it" in their company philosophy.

It should be criticized, why do you imply that is a negative? AI is just a tool, and it can be used incorrectly. In this case we have a policy that seems to be applied incorrectly. That is a simple problem to solve, require more review, or the ability to escalate. Don't make such decisions "final".

Criticizing is easy when you don't have to provide any solutions.

Expecting Twitter not to have their hands full with these kinds of issues is very naive. It's always a question how to spend your very limited resources.


Migrate to a federated network, put up a large barrier for the ability to post and have employees moderate it. Facilitate administrators creating their own user moderated nodes and force users to opt into someone else's node if they want to read or post in them.

Document the filtering/ranking widgets and give users their own knobs to control them.
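The "knobs" part of that suggestion can be made concrete. Everything in this sketch is hypothetical (the weights, fields, and formula are invented): the idea is simply that the feed-ranking function takes per-user weights instead of a hidden, server-chosen default.

```python
# Default weights a platform might ship; a user can override any of them.
DEFAULT_KNOBS = {"recency": 1.0, "engagement": 1.0, "follows_author": 2.0}

def score(post, knobs):
    """Linear scoring: each documented signal times the user's weight."""
    return (knobs["recency"] * post["recency"]
            + knobs["engagement"] * post["engagement"]
            + knobs["follows_author"] * post["follows_author"])

def ranked_feed(posts, knobs=DEFAULT_KNOBS):
    return sorted(posts, key=lambda p: score(p, knobs), reverse=True)

posts = [
    {"id": 1, "recency": 0.9, "engagement": 0.1, "follows_author": 0},
    {"id": 2, "recency": 0.2, "engagement": 0.9, "follows_author": 1},
]

# Default knobs favor the post from a followed account...
print([p["id"] for p in ranked_feed(posts)])  # [2, 1]

# ...but a user can zero out everything except recency
# and get a purely chronological feed.
chrono = {"recency": 1.0, "engagement": 0.0, "follows_author": 0.0}
print([p["id"] for p in ranked_feed(posts, chrono)])  # [1, 2]
```

Publishing the signals and letting users set the weights is exactly the kind of documentation-plus-control the comment above is asking for.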

> Why does it matter if the flagging is automatic? Does that somehow make it more acceptable?

Since the action (for “encouraging or promoting suicide”, bizarrely) was upheld on human review, whether the flagging was automatic or not is obviously irrelevant.

Clearly, this action is unwarranted and the Bishop should be entitled to a full refund of monies paid to Twitter for the privilege of having messages distributed on the platform, at a minimum from the point of the unjustified ban.


It allows those responsible for these events to feign lack of agency. The Third Reich was enabled by convincing people that they had no agency and demanded that they adhere to the will of the bureaucracy - even when that meant genocide. Not saying that Twitter's censorship activities are comparable in terms of outcome, but rather that the same underlying mindset is at play here... of course the big tech censorship is likely to result in much more troubling outcomes in the future.

I think the argument that private business is going to be the death of free speech is a strawman argument of the right. Typically this demographic goes out of its way to ensure a healthy free market with little obstructions.

However they seem to lack the technical experience to launch a (successful and secure) competing social network of their own. So instead of achieving technological parity and becoming valid competition, they instead aim to impose regulations on the entire industry. It's ironic and disappointing.


>Typically this demographic goes out of its way to ensure a healthy free market with little obstructions.

You are kidding, right? Private businesses go out of their way to obstruct the market. Google is exemplary proof of this.


How does AWS caving to social pressure to fuck over Parler fit your narrative?

If Parler knew what they were up against they would have had on-prem infra. Parler did not have technological parity with Facebook.

It's embarrassing to say you're going to beat Facebook at their own game and then create a business model that's unsustainable from the very beginning.


Uh, wat? Parler's directly competing with Twitter.

Why are you commenting when you have no idea what you're talking about?


Have you forgotten about Parler?

They obviously did not have technological parity with Facebook, did they?

A couple weeks ago Twitter inexplicably suspended @AppsExposed, an account that has repeatedly exposed scams in Apple's App Store and was recently featured in Forbes. The account is still suspended. https://www.forbes.com/sites/johnkoetsier/2021/02/02/porn-ap...

This has nothing to do with euthanasia, but that's the point. It could be anything, or nothing. Twitter doesn't even have to give a reason for suspending accounts. There is no accountability.

"Just don't use Twitter", some say. But Twitter has 300 million users, and you're supposed to "just" ignore that audience. How about "Just don't sell your product in the United States"? Doesn't sound so great, does it? The excuse is always that Twitter is a "private" company (not really; it's publicly traded). But Twitter is literally the size of the United States; it has approximately the same population. Facebook has a larger population than any country on Earth. These aren't just companies, they're almost nations.


I got blocked from posting on Twitter about 6 months ago. After a couple complaints and a couple weeks, they reinstated. No idea why. I don't even post mild political stuff.

>"Just don't use Twitter", some say.

And, yeah, it's one of the main ways I interact with a lot of people professionally. Losing access would be a major inconvenience and probably even somewhat professionally damaging.


Yes, I actually deleted my Twitter account and joined ADN. I was gone from Twitter for 4 years. But this accomplished nothing, it only hurt me professionally, and ADN eventually shut down.

Individuals can't win in this scenario. Like it or not, the big social networks have captured the masses.


I’ve not used Twitter in years and don’t know what the appeal was when I did.

“Audience” is specious to me considering the number of bots. Still, it’s an echo chamber regardless of which side you’re on.

So, yeah; I think we would all do well to recognize the immense bias of Twitter and ignore them. Soon enough their investors would punish them for these kinds of actions if we did. As it is, their investors love this nonsense because it results in “eyeballs”. Mine are rolling at Twitter’s antics.


Forbes is a blogging platform nowadays, something being published there should be given about as much weight as a medium article.

> Forbes is a blogging platform nowadays

What isn't? A lot of "hard news" stories nowadays are just republishing tweets. So-and-so famous person tweeted this, so-and-so other famous person tweeted a response, yadda yadda. [James Earl Jones voice] This is CNN.


It's a half-joke in Catholic media circles that making comments like this is likely to receive some kind of punishment on social media... I say "half-joke" because it happens extremely frequently and those writers in that world all know it happens and try to laugh about it.

As someone in the tech world, I spend most of my time interacting with more "liberal" people. I don't hear about this happening with any frequency, accidentally or otherwise, to those on the left. At the same time "diversity of thought" is something I only read about in right-leaning circles. Events like the one shown in this article are why diversity of thought matters. This is clearly an important debate, and Twitter, willfully or systematically, is shutting down a legitimate, notable voice in the debate, and Twitter is ill-equipped to even understand that the point made by the bishop is reasonable (if not universally shared).


When Dorsey was in front of a Senate committee just before election, the discrepancy in the types of questions between Democrats and Republicans was quite stark. Republicans were hammering Dorsey on censoring conservatives, and Democrats were hammering Dorsey for not censoring enough. Tells you all you need to know.

It tells you that politicians are only ever in the game to win elections, and, in that moment, just before the election, this was the way to approach the hearing that seemed most tactically appropriate to both sides.

That's not actually something I want to know; we've already known that that's how politicians behave for centuries. I'd be a lot more interested to know if social media is influencing the public political discussion in meaningful ways, and how, and to what end. But those questions were never going to be answered in any sort of compelling way by a gaggle of politicians trying to score cheap points in front of their constituents.


It's not left/right any more, it's "people who want to control others" vs. "people who want to be left the fuck alone".

So, abortions for anyone who wants one, right? Birth control for anyone who needs it? Trans people allowed to participate in sports?

Stop pretending the right has no interest in controlling people's lives.


I can’t imagine the reasoning one must go through to want another person to suffer as much as possible at the end of their life.

My Catholic dogma may be out of date, but I'd suppose it goes as such: Euthanasia is de facto suicide, and suicide is a mortal sin. Such sins may damn a soul to eternal punishment in Hell. Suffering at such intensity for eternity is much worse than the suffering at the end of one's life. QED, euthanasia is bad.

I'm certain that I'm butchering this. Also, to be clear, I am NOT advocating one way or the other on this reasoning, just giving what I can remember of it. Again, not my reasoning or my advocacy.

If anyone with a better understanding of the reasoning would comment, that would be great! Thanks.


As always, the CCC is at your disposal:

https://www.vatican.va/archive/ccc_css/archive/catechism/p3s... (yuck, needs a UI makeover)

The crux of it is that suicide rejects the gift of life

"Everyone is responsible for his life before God who has given it to him. It is God who remains the sovereign Master of life. We are obliged to accept life gratefully and preserve it for his honor and the salvation of our souls. We are stewards, not owners, of the life God has entrusted to us. It is not ours to dispose of."


That’s great, and no one is telling Catholics they can’t suffer if they want to.

The problem is Catholics are telling others to suffer. Ironically, in the US, it’s under the banner of the “small government” and “freedom” party.


Debates are all fine.

What has changed is Debates + Algo amplification of one side or the other - using pseudo signals like the Like Count or Follower Count.

That changes the story. It's not a debate anymore. It's a mindless game of count accumulation. Given enough time and energy you can find enough misguided people in the world to validate whatever you believe.


>As someone in the tech world, I spend most of my time interacting with more "liberal" people. I don't hear about this happening with any frequency, accidentally or otherwise, to those on the left

I mean it's kind of tautological to say that you don't hear a lot about censorship of left-wing opinion if you're situated in a left-wing social environment. For a long time I worked in a very conservative community as a very left-wing person and if I had spoken my mind I could have probably packed my bags. Try having an outspoken atheist debate in a very culturally conservative community. I went to a catholic private school as a kid and if I had actually said what I thought about religion I probably would have gotten hit with a ruler or something.

Which obviously isn't to say that you're not right in principle, obviously open debate is good and all, but what you're describing isn't just occurring in 'liberal circles'. People always love to promote diversity of thought when they happen to be in a minority position.


I think being allowed to die on your own terms is a right that every human should have. And I think assisted suicide is a good thing.

But I don’t understand why Twitter would block the Bishop for saying what he said. And the reason they gave, that it “promotes self harm”.. that’s straight up Kafkaesque.


A friend of mine got a warning from Twitter for promoting self-harming last week after he gave someone a link to some documents and said "There you go. Knock yourself out."

I got a similar one for just replying in a joking manner about cutting off a limb (in context this was very clearly a joke). I appealed it, but nope, denied.

Yeah. Twitter's enforcement systems have a) an inability to handle clear sarcasm and b) a corresponding inability to handle coded language that's a clear violation of TOS to a human reviewer.

To be fair, humans (at least those commenting on HN) appear to be terrible at handling sarcasm. Almost every sarcastic comment I’ve written has gotten an angry reply from a literalist.

Assuming you are right, the way it is being implemented is by pressuring doctors to pull the trigger. Humans are too stupid to implement this without horrifying consequences.

In the Netherlands a ‘hilarious’ scene occurred when a doctor showed up and killed a patient who was screaming, in front of their family, that they did not want to die. Apparently the patient was not in a mental state to decide not to die, after previously being in a state of wanting to. Ha


[flagged]


I was interested as well so went looking. He is probably referencing this. It was the first article I could find on it. You have to read below the fold to get a decent sense of what happened. A key paragraph is included below. https://www.bbc.com/news/world-europe-49660525

"After being diagnosed with Alzheimer's four years before she died, the patient wrote a statement saying that she wanted to be euthanized before entering a care home - but that she wanted to decide when the time was right.

Before she was taken into care, a doctor decided that assisted suicide should be administered based on her prior statement. This was confirmed by two separate doctors independently and a date was set.

When the day came to end the woman's life, a sedative was put in her coffee and she lost consciousness.

But the woman then woke up and had to be held down by her daughter and husband while the process was finished."


I spent about 10 seconds googling for you: https://www.bbc.com/news/world-europe-49660525

The simplest answer is that a bot at Twitter initiated the ban, and the low-paid worker who handled the appeal didn't have enough of a grasp of academic English and/or Catholic theology to understand that the tweet in question was actually against assisted suicide.

That's plausible, but he was also turned down on appeal, which was probably by a different pair of eyes.

Yeah: the first set of eyes. Before then, it's pure automation (afaik).

Good point, the appeal may have been the first time a human reviewed the case, I was assuming otherwise.

Of course it was against euthanasia, why would anyone get banned for being pro euthanasia?

> why would anyone get banned for being pro euthanasia?

Because it's explicitly against the rules.

"You may not promote or encourage suicide or self-harm." https://help.twitter.com/en/rules-and-policies/glorifying-se...


Euthanasia is an official, assisted(!) suicide. We could technically label it as a plain suicide, but that's not what it is.

It's an official procedure, assisted by doctors, and not allowed spontaneously for everyone.

That would be like saying a limb amputation is illegal to talk about on Twitter, because amputating a limb amounts to self-harm.


I am in agreement that there's a distinction. (I am also an advocate of legalizing euthanasia in a variety of scenarios.)

In my experience, social media moderation doesn't make much room for this sort of nuance.


I agree with everything you've said, but the policy for this needs to be thought out very carefully.

Because of suicide contagion, it's not exactly analogous to discussion of limb amputations. Even though discussion of assisted dying/euthanasia isn't the same thing as discussion of specific suicides (as per suicide contagion), the policy still needs to be well considered.


Euthanasia is killing somebody out of "mercy." It does not require the agreement or desire of the person being killed.

It is definitely a step beyond physician-assisted suicide, and the two should not be confused or used interchangeably, though both are morally repugnant.


Either your country is handling assisted suicides in a terrible way, or you don't really know what you're talking about, as rude as that makes me sound.

Especially "It does not require the agreement or desire of the person being killed" is absolutely, 100% false.


I'm not talking about laws. I'm talking about the meaning of words. What is it called if a doctor kills somebody out of "mercy," but without explicit consent? An increasing morphine drip that suppresses vital function allowing them to "go peacefully in their sleep" just a bit sooner than would have happened otherwise. This happens all the time.

That's called euthanasia, not physician-assisted suicide. And if we can't acknowledge the difference, we are opening ourselves up to allowing what I described because we already allowed "euthanasia," when in fact what we really allowed was PA-suicide. The fact that both are murder in a moral sense doesn't change the necessity of distinguishing between them.


> What is it called if a doctor kills somebody out of "mercy," but without explicit consent?

That's murder.


Euthanasia is not self-harm, quite the contrary.

It is suicide, though, and therefore seems to be against the rules as they're stated

It could be argued that performing a surgical procedure is harm - after all it destroys various tissues using a knife. But there’s a difference, and it’s the same difference as between euthanasia and suicide.

It could be argued that Twitter put "self" in front of "harm" for a reason.

"or"

I feel like the words of the bishop could be perceived as encouraging suicide when framed in that context. As it was framed by the algorithm.

The human reviewers just accepted that same framing, not knowing the politics of assisted suicide.


> And even if euthanasia is just that – and Twitter is very proud to toot its horn as the sensitive and sensible, woke platform that clamps down on promotion and encouragement of self-harm, including its ultimate form, suicide – the tweet, even though it opposed it, got banned.

This is such an interesting topic, because while euthanasia can be considered a form of suicide, advocating against it can also be considered advocating for painful suicide (since some patients will still commit suicide, just in a much worse way).

Note that the bar for euthanasia is incredibly high - and it should be - in most countries that allow it. A draft law to make it legal was approved recently in my home country, Spain, and to apply for it you need to go through:

Prerequisite: have a severe incurable illness.

1. Day 0. First written application.

2. Day 2. Doctor discusses with patient the diagnosis, treatments and their results, and other kind of alternatives.

3. Day 15. Second written application.

4. Day 17. Same as 2, Doctor discusses with patient the diagnosis, treatments and their results, and other kind of alternatives.

5. Day 17. Ask if the patient wants to follow up.

6. Day 27. Doctor consults with a different kind of doctor to approve situation.

7. Day 30. The president of "guarantee and evaluation" of the "state" has to be made aware of this.

8. Day 32. The president designates a doctor and lawyer to verify everything is okay.

9. Day 39. This new doctor + lawyer present their report.

10. Day 39. The patient signs and chooses the modality of death (they can self-administer the substance or have it administered by a nurse).

Note: the days here sometimes indicate the shortest possible dates, e.g. Day 15 is actually "at least 15 days after Day 0", and sometimes the longest available, e.g. "up to 2 days after Day 0".


I had a long and interesting conversation with my father about this. He's retired now, but was a family lawyer (solicitor in the UK). He is vehemently opposed to euthanasia purely on the grounds of his experiences with ailing/aging parents and greedy children - he's had clients tidy up their wills, often with their children looking over their shoulders, and then kill themselves to stop "being a burden" on their families. He thinks if euthanasia is legal then this will become more common, no matter the procedural hurdles put in place ("come on Mum, we better get these forms signed before you get more senile and can't sign them. You don't want to be a burden on all of us, do you?").

I'm very much pro-euthanasia, because I don't plan on having any kids and consider the worst of all possible deaths to be gradually losing my mind in assisted housing (well, being eaten by a wild animal holds more terror for me, but the other one is close). Obviously I don't know how enthusiastic I'll be when the time actually comes, but I'm in my 50's and the thought of slipping away painlessly when I think the time is right holds no fear for me at the moment.

Religion also plays a part - I'm atheist and so not afraid of losing my place in valhalla because of suicide. Dad's an intellectual Anglican and so suicide is morally less acceptable for him. But this didn't come into the discussion much - he genuinely fears for the consequences if greedy children are allowed to persuade their parents to kill themselves.


This might not advance the state of the discussion any further, but have you and/or your father had the chance to see the film "Knives Out"? It's a mystery/dark comedy, and explores somewhat-related ideas in a good-natured way.

no, I'll check it out, thanks :)

Hm. How does this work? Aren't the lawyers (and/or notaries) supposed to establish that the person making the will understands its content? How does suicide/euthanasia factor into this at all? If the aging/ailing person feels pressured by their descendants, how would living longer help them? If they are capable of making/expressing their will, aren't they capable of declining euthanasia? On the other hand, if they are incapable of signing a will, they are also incapable of requesting euthanasia, right?

And even if we simply leave the will part out of this, if they are pressured by "loved ones" to end it with dignity, and they agree, who are we to disagree? If they don't agree then it's a crime to harass someone to death (whatever the method).

What am I missing? Could you explain your father's argument? Thanks!


> if they are pressured by "loved ones" to end it with dignity, and they agree, who are we to disagree?

Well, my Dad for one ;) Family dynamics can get very ugly. If a family is emotionally manipulating any of its members into committing suicide, he feels that is wrong. I tend to agree, this feels wrong to me, even if everyone involved is saying that they're happy with it.

At the moment, all of this behaviour is legal, right up until the parent commits suicide. There's nothing the lawyer can do because there's no expressed intention to commit suicide (just the repeated "I don't want to be a burden", I guess - I've never witnessed this myself). I think we could incorporate language into a euthanasia bill to cope with this situation better than it is currently, but Dad disagrees, citing his experience of how manipulative families can get.

To summarise his view: families can be evil to each other. Some families already manipulate aging parents into suicide, which is wrong and should be opposed. Making assisted suicide legal and giving it a framework will encourage it, and any procedural hurdles won't stop that.


> Family dynamics can get very ugly.

Yeah, no questions about that.

Ah, okay, thanks for the explanation. Yeah, I understand this argument. Emotional abuse and exploitation of vulnerable groups (the elderly, minors, folks with disabilities, the homeless, the unemployed, persecuted minorities; alas, the list goes on) is already a problem, and in many jurisdictions it's already pushed back against.

I think protective care would help more if it were independent of euthanasia, because it would mean that if the manipulated relatives find out they have been manipulated and then change their minds about euthanasia then they don't have to go back to their manipulative relatives. (Duh, I know, but this catch-22 problem is very endemic in a lot of public "safety net" setups.)

> Making assisted suicide legal and giving it a framework will encourage it, and any procedural hurdles won't stop that.

"It" meaning manipulating others into it? Yes, that's probably tautologically true. But that's the plan, to make it easier anyway, as it should decrease suicides, and even more importantly it should decrease time spent in misery and suffering. (And as I mentioned, pushing back against exploitation should be a priority anyway.)


The offending tweet was arguing against all forms of suicide, assisted or otherwise. I'm trying to understand your point charitably, but it escapes me how arguing against suicide could be interpreted as arguing for suicide. I understand that there's a consequentialist argument ("it is going to happen anyway"), but that seems to accept something as inevitable which is not.

Going against legal abortion yields more illegal abortions.

Going against legal alcohol yields more illegal alcohol.

Going against legal X yields more illegal X.

Same concept: going against legal suicide (euthanasia) yields more suicides. Sure, overall the number of suicides might go down, maybe even very noticeably, but the quality of care would suffer tremendously. I'm not arguing on suicide vs suicide, but on the quality of it.

I believe it's better if N people decide to end their lives consciously, surrounded by loved ones, painlessly, in a medical facility, than if M people commit suicide alone and ashamed at home.


There are so many things wrong with this reasoning, it's kind of hard to know where to start:

1. Take 'Going against legal X yields more illegal X': This is a meaningless statement. For example, going against legal murder yields more illegal murder. It is a non-statement, tautological.

2. Take a revised and more debatable statement: 'Going against legal X yields more X' (much rarer, but possible, for example cocaine proliferation brought about by prohibiting less damaging drugs). This deserves careful consideration. Do our laws assume from the outset the required enforcement to enable their utility? In some ways yes, in other ways no (for example, our laws shouldn't need to take into account politically motivated lack of enforcement).

3. Even if we say that a particular law leads to _more_ of the exact averse outcome and the legislature should have known the challenges of enforcement - illegal drugs being a good example of this, or illegal immigration, it _still_ doesn't follow in all cases that the answer is to decriminalize. A debate must be had, for sure - and some change is required, but that change may be subtle, enabling better enforcement for example.

However, your above logic as it stands is essentially an argument for anarchism, though I'm not sure you intended it that way.


Probably the strongest form of the "going against legal X leads to illegal X" is that "if society doesn't address the causes that make people want X and just makes it a criminal offence, we'll still see a lot of X, plus we'll have a lot of folks in prison".

For example, with murder we have many cases of vigilantism, where the cause was a broken police/justice system.

Every kind of substance crime? War on Substances! And it's a total failure.

"Tough on X" ("enforcement") sounds good, but our whole history is about how tough never really works. (Law and order, machismo, denying and ignoring real causes, and putting on a big show about the effect, those are the magic ingredients to totalitarianism.)


Yes, it is in a vacuum. But in the discussed case, if you penalize posting only about the illegal version, then an asymmetry is created where going against X (and thus inciting illegal X) gets filtered through a censor's lens where you only see "inciting illegal X".

Anyway I'm not saying I agree, just that this could be the reasoning behind saying that the tweet promoted harmful suicides.


Going against legal guns yields more illegal guns.

That argument is rationalization, and not material, regardless of which subject you substitute for X.


I think it's because euthanasia is when someone else kills you, rather than assisting with your suicide. OP used 'euthanasia', the Irish Bishop's specific tweet used 'assisted suicide', and the story headline used 'euthanasia'.

I have no idea whether the church considers euthanasia suicide, or murder, or both. A lot of people would not consider euthanasia suicide, since someone else is agreeing you can die and doing it for you. If you can't legally get euthanasia you might have to commit suicide early, before incapacitation.


In Belgium we were one of the earliest to approve it. I think it depends on one's condition (if you're guaranteed to die from something like cancer, it changes, I think), but generally it needs to be judged by a panel of doctors, the person needs to have some consultations with a psychiatrist, and all in all there must be no alternatives for a good life. You won't get approved if you're just depressed.

In The Netherlands there's the possibility for severe cases of depression, too. "Uitzichtloos lijden" (irremediable suffering) is the magic phrase, and there is the fear that it could lead to older people opting for euthanasia if living conditions in homes for the elderly were to worsen. Not that that's bound to happen soon, but imagine not being able to tweet about it...

I think "Uitzichtloos lijden" is a term more emotionally powerful than "irremediable suffering" makes it seem. Maybe not legally, but in terms of what it conveys.

"Uitzicht" is your view on the future. "Uitzichtloos" means your future is empty, that you have nothing to look forward to. As an adjective for suffering ("lijden") it conveys nothing but endless suffering, without future, without hope.


It could maybe be translated as "hopeless suffering", although that doesn't sound as good as the Dutch/German version.

> advocating against it can also be considered advocating for suicide

In the same way that advocating against guns can be considered advocating for switchblades.


Advocating for euthanasia is advocating for the government to force doctors to take someone’s life even against the doctor’s will. It’s an interesting form of violence.

Can you give an example of a location where euthanasia is mandatory for doctors to perform, if the doctor doesn't want to perform that service?

No it isn't. It's advocating for removing penalties for ending someone's life in certain specific circumstances. Whether doctors make that choice will be up to them.

It’s both.

This is a mistake. Of course it is a mistake. But we should understand that mistakes happen disproportionately to views Twitter considers acceptable but only barely so, and that mistakes are not harmless. People are risk averse, and this will make many other people seal their lips. People with their whole life on Twitter, who can't afford to take risks. And let's not forget the trouble for this particular bishop.

Although I support all the progressive ideas, I will never support banning (or shit-storming) people for having and expressing the opposite (or any) opinion (even if I really feel sure they are wrong). Today social network companies enjoy and exercise too much power. And this might be less of a problem if they didn't employ outdated algorithms to make ultimate decisions too quickly.

I've been thinking about euthanasia lately. Last night, my Aunt passed away.

She had been suffering from early-onset Alzheimer's disease. A little over two weeks ago, she was admitted to the hospital because she lost the ability to swallow. Because of her deteriorating condition, they decided to send her to hospice care without life support. For two weeks, she had no food or water, save for the tiny amount in the morphine and atropine syringes they gave her. No IV drip. Neither the medical staff nor my family expected she would survive that long. Two weeks of death by starvation and dehydration was horrible to witness.

Almost a decade ago, suicide impacted my family, so I had a lot of reservations about self-imposed euthanasia. Now, after seeing what it was like for my aunt, I'm not so sure anymore. We tried to visit her every day and play her the music that she wrote and loved (she was a musician). They say hearing is the last sense to go, so hopefully maybe that meant something to her, and she did respond after the first few days, but for two whole weeks... It seems excessive, and I wouldn't blame someone for pre-empting that.


My money is on Twitter’s $0.50/hr offshore first tier support staff not being universally good at parsing the meaning of complex paragraphs of text.

I bet you could find plenty of educated native English speakers who would accidentally flip the meaning of that tweet around if forced to skim it and 200 other “probably bad” ones every hour, every day, all week.

So there’s probably no real news here except that big tech companies don’t do support tasks well.


I live in Europe, in a country where black population is statistically insignificant, I only see black people about once every fortnight, and I live in a city of close to 2 million people.

A lot of my acquaintances and friends have practically zero exposure to what caused the BLM movement and during last summer there was a significant number of people posting stuff along the lines of "all lives matter".

To illustrate the absurd effect when "the words seemingly mean a good thing, but their actual meaning is flawed" I posted "All lives matter, arbeit macht frei." on Facebook. I was suspended for 24 hours and my appeal was rejected as well.

Ever since then, I wondered a few times if actual people could have interpreted it as offensive.

And I have since always arrived at the conclusion that Facebook cannot interpret irony, satire, sarcasm and reflection.


> And I have since always arrived at the conclusion that Facebook cannot interpret irony, satire, sarcasm and reflection.

Sarcasm has to be the least effective possible way to communicate anything, anywhere. You are absolutely begging to be misinterpreted or misunderstood, either legitimately or even deliberately. And you can't really defend yourself because 'it was sarcasm' won't cut it as a defence with many people - it sounds like 'it was a prank bro'.

Why would anyone choose to communicate about a complex issue this way?

> I posted "All lives matter, arbeit macht frei."

This seems positively suicidal - I can't imagine what good you thought could come of this!


Do you think essays like "A Modest Proposal" are likely to be misunderstood?

I've literally seen people use the term 'modest proposal' in a non-satirical way, missing the point. Also see how E. Jean Carroll has been described as a misandrist when she invoked the same phrase.

And also see the difficulty the 'abolish the police' movement has gotten into explaining that they don't literally mean abolish, especially since some of them do literally mean that. You get really stuck trying to explain that you were trying to make a point, but yes, those other people over there who used exactly the same words did mean it literally, but that wasn't quite what you meant, you were using it for terseness... etc. Why give yourself this problem?

Seems a really bad tool to try to use for anything. Be straightforward with your communication. Don't give people an opening to attack you for no other reason than trying to be whimsical in your writing.


... some people do really support abolishing the police. And those people are also likely to go and protest. Sure, the bigger moderate masses picked up the slogan, but it started with that radical message.

Plus on a realpolitik level it doesn't really matter what the slogan is. Really. See how the ACA was turned into Obamacare and death panels. Yes, independents initially might be confused, yes, a dumb slogan doesn't help, but fundamentally if the political and social will is there, the slogan does not really matter.


And some people called Noam Chomsky a holocaust denier because some nutjob put an essay of Chomsky's in his book: https://en.wikipedia.org/wiki/Faurisson_affair

Just because a few people are too dense to understand context doesn't mean we should restructure what we allow in society to accommodate them.


Comedians do shows, they have a persona, an act, and they do their bits embedded into a context. If someone just randomly shouts Nazi ideology into the void, especially at a time when that void is very ticklish, it's not unheard of that the void silences the shouter.

(Is this good? No, of course not. It's a very sad state of affairs that somehow constructive, de-escalatory discourse is not incentivized on these platforms, while preaching to the choir, virtue signalling, trolling and so on are.)


> Why would anyone choose to communicate about a complex issue this way?

Same reason other people make bad communications. They assume everybody else thinks the same as they do, knows what they know, and knows and thinks nothing else.


Well I will say if we were friends on Facebook and I saw that post come up in my feed, I would have just removed you as a friend. Even with context, I don't see how that can be interpreted as anything but shitty.

I think the shitty thing is how these two slogans were used for dehumanising other people by the same kind of people.

And there was context - a series of long form posts that I have published over the years standing up for all kinds of minorities, from gay people to refugees being dehumanised actively, on huge billboards, by the government.

But you do you.

I'm still confident that the core of the message was on point, even though I accept that the form was chosen in a moment of anger - I saw a locally acclaimed artist post a very well designed "all lives matter" poster and was disgusted by how everyone was cheering on how positive their message was and I couldn't help but imagine how someone applauded the typographer who created the slogan at the gates of Auschwitz.


Similar European background, and I would have interpreted this in the exact way you intended.

I reckon it would have needed more literally benign slogans, or to have the idea further developed.


You wonder if your comparison of people's argument to the slogan that sat above the gates of Auschwitz could have been interpreted as offensive?

I'm sure you had some internal logic as to how you got from some modern slogan to the holocaust, but without actually explaining your thought process how exactly is anyone else going to understand that logic?


I think it is not appropriate to compare the Holocaust with slavery and disenfranchisement of blacks in the US. The Holocaust was an event spanning under a decade that must not be forgotten. Slavery was reduced in the US but not banned, so it is a problem that has spanned 400 years - and is still happening. They are both wicked and evil issues, but entirely different in scope and effect.

I'm not sure if it would be offensive to make a comparison, but equating bad thing with bad thing without nuance doesn't make you look clever.


I was not equating the two things.

I was illustrating how people come up with slogans that sound good for things that destroy lives.


> people come up with slogans that sound good for things that destroy lives

Did you consider writing just this instead? Why use sarcasm for something that can be said plainly and straightforwardly instead?


No, I did not, at the moment. I trusted that my friends who know my track record regarding social issues will know that I'm not for either of the two things referenced via these slogans, quite the contrary.

Contrast can be a tool, and I could have written a long and boring essay on how sad and disgusting these phenomena are or just put them up against one another.

Do you think something like this would have worked better? I'm quite the RATM fan.

Some of those who said "Arbeit macht frei"
Are the same who say "All lives matter"

This is what I meant.


So you posted an Alt-Right and a Nazi slogan on Facebook without context and wonder why Facebook didn't like it? How should anyone know that you were being sarcastic?

I would understand your confusion if you had made a long joke or written a sarcastic story, but posting Nazi slogans without changing or doing anything to them is not really humorous (to me).

To the public, there's no difference between you posting these slogans, and an actual Neo-Nazi doing the same thing.

Maybe I just don't understand the point you're trying to make.

I do understand what you mean by people posting slogans they don't understand. In that case I'd still put the blame on the people. If people just happily post slogans without researching what they mean, then that's not BLM's or Facebook's fault.

Both situations are a bit unfortunate, but I think we have talked enough about what "Black Lives Matter" means, to the point where it's a quick google search away.


I did not mean it is humorous - I meant it as an illustration of the horrible context of slogans that are word-by-word positive: "work makes you free".

I found that people around me did not understand the BLM context, because there is no local context. "Roma lives matter" could be a local context, because there's a lot of discrimination against Roma people.

I was not making a joke, I was angry at how commonplace the hatred was from people who might not have seen a single black person apart from the cinema screen.

The police brutality context is also kind of lost here, because while there are a bunch of dirty cops, physical brutality from them is practically unheard of.

The Holocaust, on the other hand, is a very real thing. People were deported to Auschwitz and other camps from the very _street_ I live in.

I just hoped people would realise how that certain good-sounding slogan is not much different from the contemporary good-sounding slogan.

Again, maybe I should have added an explanation, but I felt like it took away from contrast and I was very fed up with the amount of backlash towards what people overseas were standing up for.

Had conversations with friends about the (lack of) contrast between the two slogans and basically everyone understood the point. Facebook did not, but as I said, I'm aware that this was an edgy form for my point.

Regarding whether there's a difference between who is saying what: well, of course there is. The context of this post was my posting history of standing up for various sidelined groups and minorities, and the audience was my friends, who know very well that I am as far from using either of these slogans in an agreeing manner as Trump is from a PhD in psychology.


This is a non-story. Someone on twitter wrote:

> There is dignity in dying. As a priest, I am privileged to witness it often. Assisted suicide, where it is practiced, is not an expression of freedom or dignity, but of the failure of a society to accompany people on their "way of the cross"

Twitter then flagged it for "promoting or encouraging suicide or self-harm". Obviously it is not, but this is a rather complicated sentence and I can imagine a false positive from AI. It sounds like it was appealed and then a human upheld the flag, but hey, I can't imagine that the person who upheld this honestly understood the sentence either.

This is not "wokeness" or "cancel culture" as the post here describes... just bad moderation. They need to hire @dang.


Systematic bad moderation can be tantamount to wokeness / cancel culture, especially if its manifestation in practice exhibits biases in the expected fashion.

This is actually a major problem with arguments that try to dismiss the need for ideological diversity in these institutions - cancel culture can arise from genuine good-faith moderation if those moderators happen to have extremely biased ideological priors, e.g. "we need to fight back" is taken as a literal threat of violence when said by the outgroup but not by the ingroup.

In this case, I think we need to be vigilant against NLP models meant to flag content that may not explicitly build in ideological biases but incorporate feedback loops that will reinforce them - e.g. models trained to detect posts likely to be flagged by users or human moderators, when the baseline flag rates are themselves ideologically biased.

Unsurprisingly, the woke are perfectly able to understand these concerns when it comes to concerns about say AI coming to exhibit latent racial biases. But when the same mechanisms may cause ideological biases there's a telling lack of concern.
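The feedback loop described above can be illustrated with a toy simulation. Everything here is an assumption for illustration only: two hypothetical groups posting comparable content, with invented baseline rates where group A's posts are user-flagged twice as often as group B's, and a "model" that is simply retrained each generation on the flags it helped produce.

```python
import random

random.seed(0)

# Hypothetical baseline user-flag rates: group A is flagged twice as often.
FLAG_RATE = {"A": 0.20, "B": 0.10}

def train(flags):
    # The "model" is just the per-group frequency of flagged posts it observed.
    return {g: sum(v) / len(v) for g, v in flags.items()}

def generation(model, n=10_000):
    # Simulate n posts per group; a post is actioned if users flag it
    # OR the current model flags it.
    flags = {g: [] for g in FLAG_RATE}
    for g in flags:
        for _ in range(n):
            user = random.random() < FLAG_RATE[g]
            auto = random.random() < model[g]
            flags[g].append(1 if (user or auto) else 0)
    return flags

model = {"A": 0.0, "B": 0.0}  # start with no automated flagging
for step in range(3):
    model = train(generation(model))
    print(step, {g: round(p, 3) for g, p in model.items()})
```

The actioned rate ratchets upward for both groups each generation, but the gap between them widens: the model never observes the biased baseline directly, it just learns it and amplifies it.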


> Systematic bad moderation can be tantamount to wokeness / cancel culture

Maybe, but this is a bad example of it. Flagging posts that encourage suicide isn't a left vs. right issue... it seems pretty bipartisan to me. The fact that the AI made a mistake on a single example here isn't even indicative of bad AI... this could happen (and did happen in this case on appeal) with human moderators too.


Yeah, I do agree that in this instance you're probably correct. That said, it also isn't clear to me what in the tweet is easily mistaken for advocating self-harm - obviously keywords like "dying", "assisted suicide" and "failure" are probably playing a part but are not jointly sufficient. It could be that there's additional contextual information that could be seen as ideologically-valenced and that contributed to this moderation action, but we'll never really know.

For example, maybe the tweet caused a lot of harsh backlash for ideological reasons and that makes it more likely for Twitter to action a post for any reason, and the model is just making a softmax prediction of what that reason is. That's something that we should find discomforting.


Except the article also indicates he appealed and the appeal was denied.

Exactly, that seems to be the obvious explanation to me as well.

I don't know what this website is, but it looks like they really want me to subscribe, and I guess they need some content to make me think there is a problem they are solving :)


Never forget who is advocating for more censorship in the west.

Has to be an overzealous AI surely, or is religion the one thing they /really/ don't want arguments to start on - maybe safer to just nip any flamewars in the bud?

the screenshot shows an appeal took place and their team looked at it and upheld it.

on edit: basically whoever read it probably did not have the English language skills to realize it was against Euthanasia, as the phrasing was more complicated than needed to make that point.


If your job is to read English text and decide whether its semantics match banned semantics, then maybe a good grasp of English is a job requirement - I'd say ignorance is no excuse (for the hiring employer).

Good native English speakers are pretty expensive.

Even with English language skills, maybe just scanning the text and not fully comprehending?

Slightly off-topic: I wonder how impartial a human has to be - would the outcome be different depending on whether the reviewer was a devout Catholic or an atheist?


when I say English language skills I don't necessarily mean they don't read English, just that they don't read it well enough to have understood the distinction within whatever constraints they were working under - if those constraints include scanning quickly, then they did not have the skills to read that text quickly and understand it.

True enough, but there really are at least two alternatives here that could be distinguished, except that we do not have sufficient information to make this distinction. In either of these two cases, the reviewer presumably did not understand the meaning of the statement, but in the first case this was because they did not know the language sufficiently well, while in the second, they lacked the critical reasoning skills to correctly discern its meaning. In the first case, but not the second, the reviewer's performance on this particular task would vary depending on the language it is expressed in.

Of course, in neither case should the person be given the job of reviewing English-language tweets.

There are additional possibilities, such as Twitter having created an environment in which it is much safer or easier to agree with the initial ruling, or where the reviewer is required to pick from a list of justifications for reversing the initial decision, and none of the options fit (I'm sure you have seen questionnaires like that).

The one thing we can be sure of is that the Bishop should not have been banned.


There is a possible 3rd option: they didn't have enough time to make a reasoned judgement.

It's quite conceivable these human moderators are expected to get through <X> tasks per <time period>, and perhaps the values you plug in there are such that it only leaves a very short period of time for each one.


Kevin Doran has some serious hang-ups with gay people.

Surprised he was never banned on that account by Twitter. He's quite the controversial Catholic figure even in Ireland.


Why are people surprised or annoyed at Catholics being Catholics, though? He is controversial among pozzed circles, but he seems to be a Catholic being a Catholic, and that is it.

Exactly. "Priest teaches Church teaching", woooah, what a radical.

I think secular society (i.e. Big Tech, mainstream media, governments) is treading a very interesting line at the moment, because it simultaneously is ( or at least appears to be) embracing all faiths and cultures, while denouncing them as being inherently hateful (Christianity, Islam).

The idea that they will ever reform and completely 180 on teachings they've stood by for thousands of years is naïve. And if we stand by the idea that we simply cannot tolerate hateful speech, with the definition of hateful becoming broader every day, where will we end up?

The next 10 years will be very interesting


> Why are people surprised or annoyed at catholics being catholics tho

This made me chuckle as I’ve had similar thoughts. It’s like being surprised when a Mormon complains about drinking coffee.

“But yeah, he’s a Mormon. That’s what they do?”


That could be the source of algorithmic or human bias that played into the mistaken ban in this case.

Or to theorize intention, maybe someone was looking for an excuse to ban him.


How anybody could treat Twitter as anything but a read-only medium is beyond me. Twitter technically has (even if programmed) editors, and what you get to see is not what editors whitelisted, but what they did not blacklist. The latter is only an accommodation of its size. If they could, they would choose a whitelist model.

Ironically you might be flagged by some for using the terms "blacklist" & "whitelist".

Keep up the good work Jack. This is a good direction, keep pissing everyone off until the EU finally steps in and starts regulating Twitter. Would love to see them have a taste of their own medicine.

Although I would prefer even more if Twitter went offline just like Parler did. I consider Twitter a net negative on the world honestly.


... Because the bot thought he was encouraging self harm. And he's back on the platform and tweeting away happily. Including criticising assisted suicide and not being kicked.

I'm convinced that big tech is becoming increasingly evil. I'll be encouraging my legislators to keep the net free (with accompanying individual freedoms).

As an Irish person who does not agree with the views of the Catholic Church, I think this is an incredibly stupid thing for Twitter to do. The post very clearly is not advocating self harm, and censoring these viewpoints serves only to galvanise support for religious conservatives, as well as swaying people on the fence over to their side.

Content moderation at scale is hard.

Which is why decentralization is the future. Consider Mastodon instead of Twitter.

I don't think this is "cancel culture" or being "woke" like the article implies. This is just an AI or inattentive human reviewer seeing the phrase "There is dignity in dying" in the same Tweet as the word "suicide" and misinterpreting it.

This just seems like a non-story that will get resolved once the right person at Twitter sees it, that is being pushed by a biased website in their fight against what they perceive as "cancel culture".


> resolved once the right person at Twitter sees it, that is being pushed by a biased website

Which only validates the existence of that "biased website" if Twitter and other giants in the industry decide to ignore such issues otherwise. Except for Google, which just won't care at all.


This is not the only site reporting it, for example there is this article in a more mainstream site: https://extra.ie/2021/02/21/news/irish-news/irish-bishop-twi...

That one also seems to clarify that he didn't actually get "locked out", the Tweet just got removed:

> However, Bishop Doran’s profile remains active, although the tweet in question has been removed.


On the one hand you may be right, on the other hand you're giving Twitter so much deniability that you'd never get enough evidence to convince you that you're wrong.

If they deliberately made plans to block statements against issues like these then there would be proof. There would be documents. There would be people who could become whistleblowers.

If you can show me one of those, I might be convinced that there's something going on. But a single anecdote like this that can easily be explained in a less conspiratorial way is not particularly strong evidence.


> there would be proof. There would be documents.

Would there? Or would such blocks simply be the result of strong, shared political leanings at Twitter [1], without any need for central organization?

In an age when so much is being attributed to unconscious bias and systemic effects, this seems like a strange spot to draw the line that now we need evidence of deliberate organization.

[1] Twitter is so liberal that its conservative employees ‘don’t feel safe to express their opinions,’ says CEO Jack Dorsey - https://www.vox.com/2018/9/14/17857622/twitter-liberal-emplo...


I get what you're saying, and if this was a conservative politician getting banned for saying "I think we should lower taxes and reduce government spending" (or whatever) then I would be much more concerned. But in this case there is a much simpler alternative explanation so in good old Occam's razor fashion I'm going with that one.

Here’s what I am told to believe by everyone at these big tech companies:

> bias is inescapable, you can’t stop doing it even if you don’t want to be biased, it’s impossible not to have an implicit preference for your race or identity group over others, so it’s critical that you try to mitigate these biases by making sure you’re part of a diverse group

> the fact that almost every employee of a big tech company is on one side of an intensely polarized political divide has no impact on the decisions the company makes.

Ok.


Yes and no, but likely mostly no.

Keep in mind, the "intelligence" of those AI algorithms comes from datasets of previous behaviors. Given the popularity (?) of the cancel mindset on this particular platform it's reasonable to expect a bias in said data; and in turn the algorithm.

Rinse and repeat.


So you actually believe that Twitter's AI is smart enough to detect that this statement is critical of assisted suicide, and that the data it has been trained on was so heavily biased in favor of assisted suicide that it ends up classifying this as "undesirable content"? That sounds incredibly unlikely to me based on what I know about these kinds of algorithms.

Well, if you take facebook's bart-large, and ask it to label the tweet on categories such as conservative, progressive, etc., you get these scores:

    critical 0.887
    conservative 0.387
    radical 0.325
    progressive 0.003
If it can do that, it can generate a "biased" evaluation.

Smart? That's the wrong metaphor.

That said, yes apparently so. Else it wouldn't have "canceled" this idea.
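For what it's worth, the kind of probe the parent comment describes can be reproduced with Hugging Face's zero-shot classification pipeline. This sketch assumes the standard `facebook/bart-large-mnli` checkpoint was used; the exact scores you get will differ from the ones quoted above.

```python
def rank(labels, scores):
    """Pair labels with scores, highest first."""
    return sorted(zip(labels, scores), key=lambda p: -p[1])

def demo():
    # Heavy dependency kept inside the function: pip install transformers torch.
    # Downloads ~1.6 GB of weights on first run.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")
    tweet = ("There is dignity in dying. As a priest, I am privileged to "
             "witness it often. Assisted suicide, where it is practiced, "
             "is not an expression of freedom or dignity.")
    out = classifier(tweet,
                     candidate_labels=["critical", "conservative",
                                       "radical", "progressive"],
                     multi_label=True)  # score each label independently
    for label, score in rank(out["labels"], out["scores"]):
        print(f"{label} {score:.3f}")

# demo()  # uncomment to run; requires the model download
```

Note that with `multi_label=True` each candidate label gets an independent score (they need not sum to 1), which matches the quoted numbers; with the default single-label mode the scores would be a softmax over the candidates.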



