The popularity of this view makes me wish more people read Kafka. A future tyranny might end up not being Orwell's 1984 or Huxley's Brave New World, but a Kafkaesque nightmare where people are lost in a world of AI handing out absurd punishments. Kafka's novel The Castle is essentially about this, although I think his aphorisms and short stories are much better than his novels.
Quoting from Wikipedia:
> The villagers hold the officials and the castle in high regard, even though they do not appear to know what the officials do. The actions of the officials are never explained. The villagers provide assumptions and justification for the officials' actions through lengthy monologues. Everyone appears to have an explanation for the officials' actions, but they often contradict themselves and there is no attempt to hide the ambiguity. Instead, villagers praise it as another action or feature of an official.
Replace officials and castle with AI and it is almost the exact same scenario.
1. This is already happening: https://www.wired.com/2017/04/courts-using-ai-sentence-crimi...
Neil Postman discusses this at length in Technopoly: The Surrender of Culture to Technology (1992):
> Naturally, bureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control. Because of its seeming intelligence and impartiality, a computer has an almost magical tendency to direct attention away from the people in charge of bureaucratic functions and toward itself, as if the computer were the true source of authority. A bureaucrat armed with a computer is the unacknowledged legislator of our age, and a terrible burden to bear.
On that note, Kafka was also quite interested in Kabbalah, Jewish mysticism, which is almost inversely related to sociopolitical structures.
Here's a great aphorism of his that's related to religion:
“Leopards break into the temple and drink all the sacrificial vessels dry; it keeps happening; in the end, it can be calculated in advance and is incorporated into the ritual.”
Whether it's the Catholic church, YouTube, or the DMV, the point is that it's a black box of procedural spaghetti that too often produces insane results and is seemingly unaccountable.
The problem is very simple. Social media companies have three options: stop moderating content, which will kill their profits because advertisers will run away; pay people to moderate everything, which will kill their profits because it'd be wildly expensive; or continue the half-assed AI moderation they do now, which actually turns a profit but also makes stupid mistakes like this.
Computers and people don't need to operate perfectly. Expecting that is silly. So long as companies deal with errors reasonably it's all good.
We all eagerly await this utopia.
Or maybe expecting that is silly, given the many years of empirical evidence?
Are there? How many historical precedents are there for "curation" at the scale of today's tech giants: Facebook, Twitter, Google, Apple App Store, Amazon.com? IMO none of them do it well, or even reasonably.
> given the number of _potential_ instance of mishap that do not regularly occur
We only hear about the "notable" cases. Every day, people are accidentally or intentionally banned by Twitter or one of the other tech giants, and nobody ever hears about it, because these people are not famous and have nobody else to speak for them and raise a public fuss.
> any the relatively minor consequences of the mishaps that do occur
Regardless of one's opinion on the matter, you can hardly call the banning of the POTUS, for example, from the world's largest and most important social networks a "relatively minor consequence". This is very consequential. You may think the consequences are good, or you may think they're bad, but it's consequential.
If these "mishaps" are not consequential, why are we all here talking about them?
One common trend for decades has been that "conservative" advertisers quickly acquiesce once they see that their customers do not share such values. Once upon a time major brands wouldn't go anywhere near sexualized content. Now Coke/Pepsi/Ford will pay millions to get their products into a Miley Cyrus video. Once upon a time major brands didn't want to be associated with "shock jock" radio hosts. Then Howard Stern made that acceptable and they all jumped on board.
Advertisers today don't want to be associated with unmoderated websites. That too will change as such websites become a norm.
That said, US social norms get more and less restrictive over time. Dueling for example used to be perfectly legal as was harsh physical punishment for children. What we think of as progress is arguably just increasing similarity with our current views. Continue the same mutability and 2200 might seem much worse to us than today is, but they would similarly think of right now as unbearable.
There was far less sexualized content available in the 20s. A girl in a 20s-style 'swimming costume' was the advertiser-acceptable limit at the time for mass media. A Miley Cyrus video would be the sort of content available at 1920s peep shows in bad neighborhoods - content from which advertisers then stayed far away. Today they readily endorse it.
She’s a little past prime time TV, but hardly porn.
Or, to put that another way, stopping moderation would lead to a few quarters of negative growth, which would cause shareholders to riot, try to sack the board, etc, and that is exactly why it'll never happen.
That depends a lot on the lead time for dealing with the errors, and the magnitude of the negative impact the error has.
The lead time for Google dealing with their errors seems to average just below infinity. Twitter, admittedly, is quite a bit better.
This gets repeated ad nauseam whenever excessive automated moderation is questioned on big platforms, but I have yet to see any actual evidence of it. I don't believe for a second that the likes of Google, Facebook, or Twitter can't afford to turn down their algorithms and significantly increase their manual moderation efforts.
Platforms like Reddit and Discord get along alright with limited automated moderation combined with a federated mod structure where most community-specific moderation is handled by unpaid volunteers. Those platforms are gigantic, but they don't make the front page of HN daily for illogical, heavy-handed moderation practices.
Besides, nobody expects all moderation to be 100% manual. A lot of content can be reliably identified and moderated by an automated system. Automated systems can even still help with things which they cannot be certain about. Instead of removing the content and taking punitive action against the user, an automated system can just bring the questionable content to the immediate attention of an organization of humans.
Not to mention we're talking about some of the wealthiest companies in the world here. Do we have any idea how much they currently spend on manual moderation? Professional content moderation isn't exactly a highly compensated position. I suspect they could increase their moderation workforce by a couple orders of magnitude without the slightest concern about breaking the bank.
The big social media platforms complain that significant manual moderation is impossibly expensive, but they'll say just about anything to save a dime. That's the real reason they rely so heavily on automated systems - to save as much money as possible. As long as it doesn't cut into their revenue, they'll keep doing it. And if people buy their crap about it being impossible, that suits them just fine.
Reddit is constantly criticized for the effects of its moderation policies, both in how arbitrary they can seem, and how they allowed genuinely law-breaking content to stay up at the discretion of those "volunteer" moderators. They are practically constantly in the news.
(Discord exclusively has invite-only communities, so it's a totally apples-to-oranges comparison)
While I would say that reddit's behavior is at least somewhat preferable to the topic of discussion, claiming it's without a similar level of criticism just means you don't actually use the platform.
It's also worth pointing out that what others have voiced regarding AI, that such a strategy just serves to deflect blame from the company, is just as true when relying on community volunteers. I suspect that the main reason that they don't make the front page of HN specifically is because "human makes a bad decision" is several orders of magnitude less interesting to this site than "AI makes a bad decision", regardless of the relative social harm.
The founder(s) have a crazy, innovative idea to improve the world, and while they're initially perceived as mad by the masses, it's the first followers who give credence and legitimacy to this idea. The first employees, initial customers, initial investors, and even random supporters who independently believe in the value of what the originators are doing help propel acceptance within the larger community and eventually the masses.
This is the authoritarian opposite - really terrible decisions and mechanisms have been implemented and are being repeated on a daily basis by powerful companies or organisations, and while the masses find it worrying initially, there are apparently no shortage of apologists online and offline who are falling over themselves to helpfully explain to the rest of us why this crap is ok:
"It's an AI algorithm"
"They're a private company"
"You don't have to use them"
"Well, they're standing against bigotry"
And then suddenly we find ourselves in an environment not unlike the Kafkaesque scenarios referenced. Be wary of "explainers" - often they're just building consensus, and it's not clear whether they're building in the right direction or the wrong one.
"One day, shortly before Christmas, a fly becomes jammed in a teleprinter, misprinting a copy of an arrest warrant it was receiving resulting in the arrest and accidental death during interrogation of cobbler Archibald Buttle instead of renegade heating engineer and suspected terrorist Archibald Tuttle."
If you're worried about them _becoming_ officials, then act now. Leave these platforms and convince others to do the same.
- Exclude you from the largest (and nearly monopolizing) media channels. Good luck getting an online business off the ground when an AI filter bans you from YouTube, Facebook, Twitter, and Google search results. And good luck contacting Google support to contest it, which is notoriously absent for virtually everyone.
- Have you socially ostracized by labelling you as X bad thing, all without you having any recourse or ability to contest the designation. The fact that stuff like this is legal blows my mind: https://www.cbsnews.com/news/yelp-racist-alert-added-busines...
- Collude to affect government policy and prevent competitors from gaining any footholds. Typical BigCo stuff.
We in our consumer role, by not participating in these business models, are the only truly free actors in the system at this time. The only other way out is for government to get bigger and step in, as it's doing in Australia now.
The assumption that we do still have agency, and can exercise power as a collective "by voting with our feet/wallet", is wrong. We (some isolated heroes and activists who care) might be able to throw some spanner into the works or otherwise help things unravel at a slower speed. But things will still unravel in the same way as Jacques Ellul documented and predicted in "La Technique".
To get rid of them is not possible, because the only way out we can imagine is regulation, which will further cement their position and the status quo. There is no future in which these companies and power structures will not be around. I think it is more likely we see the end of the world than a world where these platforms don't exist (maybe not among everyone my age, but certainly among everyone 20 years younger than myself who has never experienced society without the Internet/Web).
 The Technological Society https://archive.org/details/JacquesEllulTheTechnologicalSoci...
Legally-required quarantine-checking apps or contact-tracing apps in some countries have only been made available through the Apple and Google Play stores. The Android apps often require Google Play Services, which require a Google account.
The same will likely be true of the vaccination-certificate apps that are being planned in some countries (countries intend to allow you to generate limited-time QR codes, not present a permanent paper certificate).
Consequently, staying on the good side of supposedly private platforms is increasingly necessary to be an ordinary, law-abiding citizen.
If you mean this figuratively, we already have many situations - going back at least to the Arab Spring - of government officials directing policy on social media sites. Secretary Clinton of the Obama Administration spoke about this many times.
If you mean this literally, check out how may high level policy types from social media companies came from and have returned to high ranking spokesman+policy roles in Biden Administration.
But fundamentally, it doesn't require being an official. Even filtering and shaping search results has an impact.. and this has been measured repeatedly. Here's a study from five years ago (aka pre-Trump) which demonstrated it:
I think the present for some people is already a synthesis of all three and if we're not careful it will be for everyone.
If this were deliberate flagging it would mean they were way outside of any reasonable interpretations of their guidelines, and users really could no longer post anything and be confident it wouldn't run afoul of those interpretations. This is much harder to fix, and the confusion it would cause is significantly higher.
No muss, no fuss.
How would you solve this? Your solution will be criticized by the other HN-ers here :D.
Announce a sale of intellectual and physical assets and close down. I would realize the company is a negative drain on society.
A platform so gigantic that it's impossible for humans to intelligently manage it should simply not exist.
You mean that a bunch of your friends are on one platform, a bunch of others on another platform, and your family on still another one.
But then you have to be on all platforms, so this means everyone else also needs to be on all platforms.
So how exactly do you see this working when reality pushes everyone to use a single platform? Because, you know, it's a social network.
2) Somehow humanity survived without these networks pre-2007ish.
3) I have no problem with the existence of many smaller special-interest social networks. Those are manageable, and we had them before the rise of the BigCos. The problem is with the existence of a few all-encompassing, world-consuming, general-purpose social networks. They have proven to be completely unmanageable and toxic.
"Abolish Gmail", for instance, doesn't mean abolish email providers. Not at all. And it doesn't mean that "the next gmail" will replace it. Healthy competition is good, unhealthy monopolies are bad.
Your point 2 says you really want to join the Amish.
You mean smaller platforms like Parler or 4chan?
What is my "monopoly theory"? I clarified what I meant: "A platform so gigantic that it's impossible for humans to intelligently manage it". We don't really have a good term for a corporation that has undue, oversized market power. Pedants like to argue "so-and-so is not literally a monopoly!" but this is just wordplay and doesn't really address the serious social issues involved.
> Your point 2 says you really want to join the Amish.
> You mean smaller platforms like Parler or 4chan?
I mean smaller platforms. It would be ridiculous to suggest that your cherrypicked examples are representative. You can make anything look bad by cherrypicking.
It would work like this: by default, nothing is moderated. Instead, every post/tweet/etc. has to be labeled by the user. There can be an auto-suggestion feature to make it a quick process.
Then, other users can choose which tags they want to view or hide. If you don't want to see content tagged with X, no problem. If you do, you can.
In my mind, this would solve a few problems:
1. Justifiably ban people that try to circumvent the tagging system. You aren't determining whether their speech is allowed or not, merely ensuring that they follow the very basic rules of the site. This is much simpler than relying on an AI to interpret a nuanced comment and then moderate it.
2. Allow everyone to say whatever they want. No one will feel silenced.
Again, I'm sure there are holes in this idea, but I think it might be a better approach.
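The tagging scheme described above can be sketched in a few lines of Python. To be clear, the names here (`Post`, `Prefs`, `visible_posts`) and the data shapes are my own illustrative inventions, not any real platform's API:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    tags: set  # content labels chosen by the post's author

@dataclass
class Prefs:
    hidden_tags: set = field(default_factory=set)  # labels the viewer opted out of

def visible_posts(posts, prefs):
    # A post is shown unless it carries any tag the viewer has hidden.
    return [p for p in posts if not (p.tags & prefs.hidden_tags)]

feed = [
    Post("alice", "Cute cat pictures", {"animals"}),
    Post("bob", "Graphic war footage", {"graphic-violence", "news"}),
]
squeamish = Prefs(hidden_tags={"graphic-violence"})
print([p.author for p in visible_posts(feed, squeamish)])  # ['alice']
```

The appeal of the design is that enforcement reduces to one checkable rule - did the author tag honestly - instead of asking an AI to interpret a nuanced comment.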
Expecting Twitter not to have their hands full with these kinds of issues is very naive. It's always a question how to spend your very limited resources.
Since the action (for “encouraging or promoting suicide”, bizarrely) was upheld on human review, whether the flagging was automatic or not is obviously irrelevant.
Clearly, this action is unwarranted and the Bishop should be entitled to a full refund of monies paid to Twitter for the privilege of having messages distributed on the platform, at a minimum from the point of the unjustified ban.
However they seem to lack the technical experience to launch a (successful and secure) competing social network of their own. So instead of achieving technological parity and becoming valid competition, they instead aim to impose regulations on the entire industry. It's ironic and disappointing.
You are kidding right ?
Private businesses go out of their way to obstruct the market.
Google being exemplary proof of this.
It's embarrassing to say you're going to beat Facebook at their own game and then create a business model that's unsustainable from the very beginning.
Why are you commenting when you have no idea what you're talking about?
This has nothing to do with euthanasia, but that's the point. It could be anything, or nothing. Twitter doesn't even have to give a reason for suspending accounts. There is no accountability.
"Just don't use Twitter", some say. But Twitter has 300 million users, and you're supposed to "just" ignore that audience. How about "Just don't sell your product in the United States"? Doesn't sound so great, does it. The excuse is always is Twitter is a "private" company (not really, they're publicly traded). But Twitter is literally the size of the United States, it has approximately the same population. Facebook has a larger population than any country on Earth. These aren't just companies, they're almost nations.
>"Just don't use Twitter", some say.
And, yeah, it's one of the main ways I interact with a lot of people professionally. Losing access would be a major inconvenience and probably even somewhat professionally damaging.
Individuals can't win in this scenario. Like it or not, the big social networks have captured the masses.
“Audience” is specious to me considering the number of bots. Still, it’s an echo chamber regardless of which side you’re on.
So, yeah; I think we would all do well to recognize the immense bias of Twitter and ignore them. Soon enough their investors would punish them for these kinds of actions if we did. As it is, their investors love this nonsense because it results in "eyeballs". Mine are rolling at Twitter's antics.
What isn't? A lot of "hard news" stories nowadays are just republishing tweets. So-and-so famous person tweeted this, so-and-so other famous person tweeted a response, yadda yadda. [James Earl Jones voice] This is CNN.
As someone in the tech world, I spend most of my time interacting with more "liberal" people. I don't hear about this happening with any frequency, accidentally or otherwise, to those on the left. At the same time "diversity of thought" is something I only read about in right-leaning circles. Events like the one shown in this article are why diversity of thought matters. This is clearly an important debate, and Twitter, willfully or systematically, is shutting down a legitimate, notable voice in the debate, and Twitter is ill-equipped to even understand that the point made by the bishop is reasonable (if not universally shared).
That's not actually something I want to know; we've already known that that's how politicians behave for centuries. I'd be a lot more interested to know if social media is influencing the public political discussion in meaningful ways, and how, and to what end. But those questions were never going to be answered in any sort of compelling way by a gaggle of politicians trying to score cheap points in front of their constituents.
Stop pretending the right has no interest in controlling people's lives.
I'm certain that I'm butchering this. Also, to be clear, I am NOT advocating one way or the other on this reasoning, just giving what I can remember about the reasoning. Again, not my reasoning or advocation.
If anyone with a better understanding of the reasoning would comment, that would be great! Thanks.
https://www.vatican.va/archive/ccc_css/archive/catechism/p3s... (yuck, needs a UI makeover)
The crux of it is that suicide rejects the gift of life:
"Everyone is responsible for his life before God who has given it to him. It is God who remains the sovereign Master of life. We are obliged to accept life gratefully and preserve it for his honor and the salvation of our souls. We are stewards, not owners, of the life God has entrusted to us. It is not ours to dispose of."
The problem is Catholics are telling others to suffer. Ironically, in the US, it’s under the banner of the “small government” and “freedom” party.
What has changed is Debates + Algo amplification of one side or the other - using pseudo signals like the Like Count or Follower Count.
That changes the story. Its not a debate anymore. Its a mindless game of count accumulation. Given enough time and energy you can find enough misguided people in the world to validate whatever you believe.
I mean it's kind of tautological to say that you don't hear a lot about censorship of left-wing opinion if you're situated in a left-wing social environment. For a long time I worked in a very conservative community as a very left-wing person and if I had spoken my mind I could have probably packed my bags. Try having an outspoken atheist debate in a very culturally conservative community. I went to a catholic private school as a kid and if I had actually said what I thought about religion I probably would have gotten hit with a ruler or something.
Which obviously isn't to say that you're not right in principle, obviously open debate is good and all, but what you're describing isn't just occurring in 'liberal circles'. People always love to promote diversity of thought when they happen to be in a minority position.
But I don’t understand why Twitter would block the Bishop for saying what he said. And the reason they gave, that it “promotes self harm”.. that’s straight up Kafkaesque.
In the Netherlands a ‘hilarious’ scene occurred when a doctor showed up and killed a patient who was screaming that she did not want to die, in front of her family. Apparently the patient was not in a mental state to decide not to die, after previously having been in a state of wanting to. Ha
"After being diagnosed with Alzheimer's four years before she died, the patient wrote a statement saying that she wanted to be euthanized before entering a care home - but that she wanted to decide when the time was right.
Before she was taken into care, a doctor decided that assisted suicide should be administered based on her prior statement. This was confirmed by two separate doctors independently and a date was set.
When the day came to end the woman's life, a sedative was put in her coffee and she lost consciousness.
But the woman then woke up and had to be held down by her daughter and husband while the process was finished."
Because it's explicitly against the rules.
"You may not promote or encourage suicide or self-harm." https://help.twitter.com/en/rules-and-policies/glorifying-se...
It's an official procedure, assisted by doctors, and not allowed spontaneously for everyone.
That would be like saying a limb amputation is illegal to talk about on Twitter, because amputating a limb amounts to self-harm.
In my experience, social media moderation doesn't make much room for this sort of nuance.
Because of suicide contagion, it's not exactly analogous to discussion of limb amputations. Even though discussion of assisted dying/euthanasia isn't the same thing as discussion of specific suicides (as per suicide contagion), the policy still needs to be well considered.
It is definitely a step beyond physician-assisted suicide, and the two should not be confused or used interchangeably, though both are morally repugnant.
Especially "It does not require the agreement or desire of the person being killed" is absolutely, 100% false.
That's called euthanasia, not physician-assisted suicide. And if we can't acknowledge the difference, we are opening ourselves up to allowing what I described because we already allowed "euthanasia," when in fact what we really allowed was PA-suicide. The fact that both are murder in a moral sense doesn't change the necessity of distinguishing between them.
The human reviewers just took that same context not knowing the politics of assisted suicide.
This is such an interesting topic, because while euthanasia can be considered a form of suicide, advocating against it can also be considered advocating for painful suicide (since some patients will still commit suicide, just in a much worse way).
Note that the bar for euthanasia is incredibly high - and it should be - in most countries that allow it. A draft law to make it legal was recently approved in my home country, Spain, and to apply for it you have to go through the following:
Prerequisite: have a severe incurable illness.
1. Day 0. First written application.
2. Day 2. Doctor discusses with patient the diagnosis, treatments and their results, and other kind of alternatives.
3. Day 15. Second written application.
4. Day 17. Same as 2, Doctor discusses with patient the diagnosis, treatments and their results, and other kind of alternatives.
5. Day 17. Ask if the patient wants to follow up.
6. Day 27. Doctor consults with a different kind of doctor to approve the situation.
7. Day 30. The president of "guarantee and evaluation" of the "state" has to be made aware of this.
8. Day 32. The president designates a doctor and lawyer to verify everything is okay.
9. Day 39. This new doctor + lawyer present their report.
10. Day 39. The patient signs and chooses the modality of death (they can self-administer the substance or have it administered by a nurse).
Note: the days here sometimes mark the shortest possible dates (e.g. Day 15 is actually "at least 15 days after Day 0") and sometimes the longest allowed (e.g. "up to 2 days after Day 0").
I'm very much pro-euthanasia, because I don't plan on having any kids and consider the worst of all possible deaths to be gradually losing my mind in assisted housing (well, being eaten by a wild animal holds more terror for me, but the other one is close). Obviously I don't know how enthusiastic I'll be when the time actually comes, but I'm in my 50's and the thought of slipping away painlessly when I think the time is right holds no fear for me at the moment.
Religion also plays a part - I'm atheist and so not afraid of losing my place in Valhalla because of suicide. Dad's an intellectual Anglican and so suicide is morally less acceptable for him. But this didn't come into the discussion much - he genuinely fears the consequences if greedy children are allowed to persuade their parents to kill themselves.
And even if we simply leave the will part out of this, if they are pressured by "loved ones" to end it with dignity, and they agree, who are we to disagree? If they don't agree then it's a crime to harass someone to death (whatever the method).
What am I missing? Could you explain your father's argument? Thanks!
Well, my Dad for one ;) Family dynamics can get very ugly. If a family is emotionally manipulating any of its members into committing suicide, he feels that is wrong. I tend to agree, this feels wrong to me, even if everyone involved is saying that they're happy with it.
At the moment, all of this behaviour is legal, right up until the parent commits suicide. There's nothing the lawyer can do because there's no expressed intention to commit suicide (just the repeated "I don't want to be a burden", I guess - I've never witnessed this myself). I think we could incorporate language into a Euthanasia bill to cope with this situation better than it is currently, but Dad disagrees, citing his experience of how manipulative families can get.
To summarise his view: families can be evil to each other. Some families already manipulate aging parents into suicide, which is wrong and should be opposed. Making assisted suicide legal and giving it a framework will encourage it, and any procedural hurdles won't stop that.
Yeah, no questions about that.
Ah, okay, thanks for the explanation. Yeah, I understand this argument. Emotional abuse, exploitation of vulnerable groups (from elderly, minors, to folks with disabilities, homeless, unemployed, persecuted minorities, etc. etc, alas the list goes on for long) is already a problem, and in many jurisdictions it's already pushed back against.
I think protective care would help more if it were independent of euthanasia, because it would mean that if the manipulated relatives find out they have been manipulated and then change their minds about euthanasia then they don't have to go back to their manipulative relatives. (Duh, I know, but this catch-22 problem is very endemic in a lot of public "safety net" setups.)
> Making assisted suicide legal and giving it a framework will encourage it, and any procedural hurdles won't stop that.
"It" meaning manipulating others into it? Yes, that's probably tautologically true. But that's the plan, to make it easier anyway, as it should decrease suicides, and even more importantly it should decrease time spent in misery and suffering. (And as I mentioned, pushing back against exploitation should be a priority anyway.)
Going against legal alcohol yields more illegal alcohol.
Going against legal X yields more illegal X.
Same concept: going against legal suicide (euthanasia) yields more suicides. Sure, overall the number of suicides might go down, maybe even very noticeably, but the quality of care would suffer tremendously. I'm not arguing suicide vs suicide, but about the quality of it.
I believe it's better if N people decide to end their lives consciously, surrounded by loved ones, in a medical facility, painlessly, than if M people commit suicide alone and ashamed at home.
1. Take 'Going against legal X yields more illegal X': This is a meaningless statement. For example, going against legal murder yields more illegal murder. It is a non-statement, tautological.
2. Take a revised and more debatable statement: 'Going against legal X yields more X' (much rarer, but possible, for example cocaine proliferation brought about by prohibiting less damaging drugs). This deserves careful consideration. Do our laws assume from the outset the required enforcement to enable their utility? In some ways yes, in other ways no (for example, our laws shouldn't need to take into account politically motivated lack of enforcement).
3. Even if we say that a particular law leads to _more_ of the exact averse outcome and the legislature should have known the challenges of enforcement - illegal drugs being a good example of this, or illegal immigration, it _still_ doesn't follow in all cases that the answer is to decriminalize. A debate must be had, for sure - and some change is required, but that change may be subtle, enabling better enforcement for example.
However, your above logic as it stands is essentially an argument for anarchism, though I'm not sure you intended it that way.
For example, with murder we have had many cases of vigilantism, and the cause was a broken police/justice system.
Every kind of substance crime? The War on Substances! And it has been a total failure.
"Tough on X" ("enforcement") sounds good, but our whole history is about how tough never really works. (Law and order, machismo, denying and ignoring real causes, and putting on a big show about the effect, those are the magic ingredients to totalitarianism.)
Anyway I'm not saying I agree, just that this could be the reasoning behind saying that the tweet promoted harmful suicides.
That argument is rationalization, and not material, regardless of which subject you substitute for X.
I have no idea whether the church considers euthanasia suicide, murder, or both. A lot of people would not consider euthanasia suicide, since someone else is agreeing that you can die and doing it for you. If you can't legally get euthanasia, you might have to take your own life early, before incapacitation.
"Uitzicht" is your view of the future. "Uitzichtloos" means your future is empty, that you have nothing to look forward to. As an adjective for suffering ("lijden"), it conveys nothing but endless suffering, without future, without hope.
In the same way that advocating against guns can be considered advocating for switchblades.
She had been suffering from early-onset Alzheimer's disease. A little over two weeks ago, she was admitted to the hospital because she lost the ability to swallow. Because of her deteriorating condition, they decided to send her to hospice care without life support. For two weeks, she had no food or water, save for the tiny amount in the morphine and atropine syringes they gave her. No IV drip. Neither the medical staff nor my family expected she would survive that long. Two weeks of death by starvation and dehydration was horrible to witness.
Almost a decade ago, suicide impacted my family, so I had a lot of reservations about self-imposed euthanasia. Now, after seeing what it was like for my aunt, I'm not so sure anymore. We tried to visit her every day and play her the music that she wrote and loved (she was a musician). They say hearing is the last sense to go, so hopefully that meant something to her, and she did respond after the first few days, but for two whole weeks... It seems excessive, and I wouldn't blame someone for pre-empting that.
I bet you could find plenty of educated native English speakers who would accidentally flip the meaning of that tweet around if forced to skim it and 200 other “probably bad” ones every hour, every day, all week.
So there’s probably no real news here except that big tech companies don’t do support tasks well.
A lot of my acquaintances and friends have practically zero exposure to what caused the BLM movement and during last summer there was a significant number of people posting stuff along the lines of "all lives matter".
To illustrate the absurd effect when "the words seemingly mean a good thing, but their actual meaning is flawed" I posted "All lives matter, arbeit macht frei." on Facebook. I was suspended for 24 hours and my appeal was rejected as well.
Ever since then, I wondered a few times if actual people could have interpreted it as offensive.
And I have since then always arrived at the conclusion that Facebook cannot interpret irony, satire, sarcasm, or reflection.
Sarcasm has to be the least effective possible way to communicate anything, anywhere. You are absolutely begging to be misinterpreted or misunderstood, either legitimately or even deliberately. And you can't really defend yourself because 'it was sarcasm' won't cut it as a defence with many people - it sounds like 'it was a prank bro'.
Why would anyone choose to communicate about a complex issue this way?
> I posted "All lives matter, arbeit macht frei."
This seems positively suicidal - I can't imagine what good you thought could come of this!
And also see the difficulty that the 'abolish the police' movement has gotten into explaining that they don't literally mean abolish, especially since some of them do literally mean that. You get really stuck trying to explain that you were trying to make a point; yes, those other people over there who used exactly the same words did mean it literally, but that wasn't quite what you meant, you were just using it for terseness... etc. Why give yourself this problem?
Seems a really bad tool to try to use for anything. Be straightforward with your communication. Don't give people an opening to attack you for no other reason than trying to be whimsical in your writing.
Plus, on a realpolitik level, it doesn't really matter what the slogan is. Really. See how the ACA was turned into Obamacare and death panels. Yes, independents might initially be confused, and yes, a dumb slogan doesn't help, but fundamentally, if the political and social will is there, the slogan does not really matter.
Just because a few people are too dense to understand context doesn't mean we should restructure what we allow in society to accommodate them.
(Is this good? No, of course not. It's a very sad state of affairs that somehow constructive, de-escalatory discourse is not incentivized on these platforms, while preaching to the choir, virtue signalling, trolling and so on are.)
Same reason other people make bad communications. They assume everybody else thinks the same as they think, and know what they know, and know nothing else and think nothing else.
And there was context - a series of long form posts that I have published over the years standing up for all kinds of minorities, from gay people to refugees being dehumanised actively, on huge billboards, by the government.
But you do you.
I'm still confident that the core of the message was on point, even though I accept that the form was chosen in a moment of anger - I saw a locally acclaimed artist post a very well designed "all lives matter" poster and was disgusted by how everyone was cheering on how positive their message was and I couldn't help but imagine how someone applauded the typographer who created the slogan at the gates of Auschwitz.
I reckon it would have needed a more literally benign slogan, or to have the idea developed further.
I'm sure you had some internal logic as to how you got from some modern slogan to the holocaust, but without actually explaining your thought process how exactly is anyone else going to understand that logic?
I'm not sure if it would be offensive to make a comparison, but equating bad thing with bad thing without nuance doesn't make you look clever.
I was illustrating how people come up with slogans that sound good for things that destroy lives.
Did you consider writing just this instead? Why use sarcasm for something that can be said plainly and straightforwardly instead?
Contrast can be a tool, and I could have written a long and boring essay on how sad and disgusting these phenomena are or just put them up against one another.
Do you think something like this would have worked better? I'm quite the RATM fan.
Some of those who said
Arbeit macht frei
Are the same who say
All lives matter
This is what I meant.
I would understand your confusion if you had made a long joke or written a sarcastic story, but posting Nazi slogans without changing or doing anything to them is not really humorous (to me).
To the public, there's no difference between you posting these slogans, and an actual Neo-Nazi doing the same thing.
Maybe I just don't understand the point you're trying to make.
I do understand what you mean about people posting slogans they don't understand. In that case I'd still put the blame on the people. If people just happily post slogans without researching what they mean, then that's not BLM's or Facebook's fault.
Both situations are a bit unfortunate, but I think we have talked enough about what "Black Lives Matter" means, to the point where it's a quick google search away.
I found that people around me did not understand the BLM context, because there is no local context. "Roma lives matter" could be a local equivalent, because there's a lot of discrimination against Roma people.
I was not making a joke, I was angry at how commonplace the hatred was from people who might not have seen a single black person apart from the cinema screen.
The police brutality context is also kind of lost here, because while there are a bunch of dirty cops, physical brutality from them is practically unheard of.
The Holocaust, on the other hand, is a very real thing here. People were deported to Auschwitz and other camps from the very _street_ I live on.
I just hoped people would realise how that certain good-sounding slogan is not much different from the contemporary good-sounding slogan.
Again, maybe I should have added an explanation, but I felt like it took away from contrast and I was very fed up with the amount of backlash towards what people overseas were standing up for.
I had conversations with friends about the (lack of) contrast between the two slogans, and basically everyone understood the point. Facebook did not, but as I said, I'm aware that this was an edgy form for my point.
Regarding whether there's a difference between who is saying what: well of course there is, the context of this post was my posting history standing up for various sidelined groups and minorities and the audience of this post were my friends who very well know that I am as far from using either of the slogans in an agreeing manner as is Trump from a PhD in psychology.
> There is dignity in dying. As a priest, I am privileged to witness it often. Assisted suicide, where it is practiced, is not an expression of freedom or dignity, but of the failure of a society to accompany people on their "way of the cross"
Twitter then flagged it for "promoting or encouraging suicide or self-harm". Obviously it is not, but this is a rather complicated sentence and I can imagine a false positive from AI. It sounds like it was appealed and then a human upheld the flag, but hey, I can't imagine that the person who upheld this honestly understood the sentence either.
This is not "wokeness" or "cancel culture" as the post here describes... just bad moderation. They need to hire @dang.
This is actually a major problem with arguments that try to dismiss the need to have ideological diversity in these institutions - cancel culture can arise from genuine good faith moderation if those moderators also happen to just have extremely biased ideological priors, eg. "we need to fight back" is taken as a literal threat of violence when said by the outgroup but not by the ingroup.
In this case, I think we need to be vigilant against NLP models meant to flag content that may not explicitly build in ideological biases but incorporate feedback loops that will reinforce them - eg. they're meant to detect posts that will likely be flagged by users / human moderators, but the baseline flag rates are ideologically-biased.
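That feedback loop is easy to demonstrate with a toy simulation (all rates below are hypothetical, chosen purely for illustration): two groups produce genuinely rule-breaking posts at the same rate, but human flaggers over-flag the borderline posts of one group, and a model that simply learns the empirical flag rate per group inherits the gap.

```python
import random

random.seed(0)

# Hypothetical parameters: both groups write "actually bad" posts at the
# SAME rate, but flaggers hit group B's harmless posts ten times as often.
TRUE_BAD_RATE = 0.10
FLAG_RATE_IF_BAD = 0.90                    # bad posts usually get flagged
FLAG_RATE_IF_OK = {"A": 0.01, "B": 0.10}   # biased baseline for OK posts

def simulate(n=100_000):
    """Generate (group, was_flagged) training examples."""
    data = []
    for _ in range(n):
        group = random.choice("AB")
        bad = random.random() < TRUE_BAD_RATE
        p_flag = FLAG_RATE_IF_BAD if bad else FLAG_RATE_IF_OK[group]
        data.append((group, random.random() < p_flag))
    return data

def learned_flag_rate(data, group):
    """The 'model': the empirical P(flagged | group) it would learn."""
    flags = [flagged for g, flagged in data if g == group]
    return sum(flags) / len(flags)

data = simulate()
# Group B's learned rate is nearly double group A's, even though both
# groups break the rules at an identical true rate.
print("A:", learned_flag_rate(data, "A"))
print("B:", learned_flag_rate(data, "B"))
```

Any classifier trained to predict those flags would reproduce the same skew, so auditing the model alone tells you nothing; the bias lives in the baseline flag rates of the training labels.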
Unsurprisingly, the woke are perfectly able to understand these concerns when it comes to concerns about say AI coming to exhibit latent racial biases. But when the same mechanisms may cause ideological biases there's a telling lack of concern.
Maybe, but this is a bad example of it. Flagging posts that encourage suicide isn't a left vs. right issue... it seems pretty bipartisan to me. The fact that the AI made a mistake on a single example here isn't even indicative of bad AI... this could happen (and did happen in this case on appeal) with human moderators too.
For example, maybe the tweet caused a lot of harsh backlash for ideological reasons and that makes it more likely for Twitter to action a post for any reason, and the model is just making a softmax prediction of what that reason is. That's something that we should find discomforting.
I don't know what this website is, but it looks like they really want me to subscribe, and I guess they need some content to make me think there is a problem they are solving :)
on edit: basically whoever read it probably did not have the English language skills to realize it was against Euthanasia, as the phrasing was more complicated than needed to make that point.
I wonder how impartial a human has to be - would the outcome be different depending if the reviewer was a devoted catholic or an atheist?
Of course, in neither case should the person be given the job of reviewing English-language tweets.
There are additional possibilities, such as Twitter having created an environment in which it is much safer or easier to agree with the initial ruling, or one where the reviewer is required to pick from a list of justifications for reversing the initial decision, and none of the options fit. (I'm sure you have seen questionnaires like that.)
The one thing we can be sure of is that the Bishop should not have been banned.
It's quite conceivable these human moderators are expected to get through <X> tasks per <time period>, and perhaps the values you plug in there are such that it only leaves a very short period of time for each one.
Surprised he was never banned on that account by Twitter. He's quite the controversial Catholic figure even in Ireland.
I think secular society (i.e. Big Tech, mainstream media, governments) is treading a very interesting line at the moment, because it simultaneously is ( or at least appears to be) embracing all faiths and cultures, while denouncing them as being inherently hateful (Christianity, Islam).
The idea that they will ever reform and completely 180 on teachings they've stood by for thousands of years is naïve.
And if we stand by the idea that we simply cannot tolerate hateful speech, with the definition of hateful becoming broader every day, where will we end up?
The next 10 years will be very interesting.
This made me chuckle as I’ve had similar thoughts. It’s like being surprised when a Mormon complains about drinking coffee.
“But yeah, he’s a Mormon. That’s what they do?”
Or to theorize intention, maybe someone was looking for an excuse to ban him.
Although I would prefer even more if Twitter went offline just like Parler did. I consider Twitter a net negative on the world honestly.
This just seems like a non-story that will get resolved once the right person at Twitter sees it, that is being pushed by a biased website in their fight against what they perceive as "cancel culture".
Which only validates the existence of that "biased website" if Twitter and other giants in the industry decide to ignore such issues otherwise. Except for Google, which just won't care at all.
That one also seems to clarify that he didn't actually get "locked out", the Tweet just got removed:
> However, Bishop Doran’s profile remains active, although the tweet in question has been removed.
If you can show me one of those, I might be convinced that there's something going on. But a single anecdote like this, which can easily be explained in a less conspiratorial way, is not particularly strong evidence.
Would there? Or would such blocks simply be the result of strong, shared political leanings at Twitter, without any need for central organization?
In age when so much is being attributed to unconscious bias and systemic effects, this seems like a strange spot to draw the line that now we need evidence of deliberate organization.
 Twitter is so liberal that its conservative employees ‘don’t feel safe to express their opinions,’ says CEO Jack Dorsey - https://www.vox.com/2018/9/14/17857622/twitter-liberal-emplo...
> bias is inescapable, you can’t stop doing it even if you don’t want to be biased, it’s impossible not to have an implicit preference for your race or identify group over others, so it’s critical that you try to mitigate these biases by making sure you’re part of a diverse group
> the fact that almost every employee of a big tech company is on one side of an intensely polarized political divide has no impact on the decisions the company makes.
Keep in mind, the "intelligence" of those AI algorithms comes from datasets of previous behaviors. Given the popularity (?) of the cancel mindset on this particular platform it's reasonable to expect a bias in said data; and in turn the algorithm.
Rinse and repeat.
That said, yes apparently so. Else it wouldn't have "canceled" this idea.