A YouTube chat about chess got flagged for hate speech (wired.com)
444 points by lxm on July 22, 2021 | 424 comments

We need a word or phrase for this phenomenon, where we attempt to substitute human pattern recognition with algorithms that just aren't up to the job. Facebook moderation, Tesla Full Self Driving, the War Games movie, arrests for mistaken facial identification. It's becoming an increasingly dystopic force in our lives and will likely get much worse before getting even worse. So it needs a label. Maybe there's a ten syllable German word that expresses it perfectly?

Not quite the same situation, but "Malgorithms" is used by Private Eye (a British satirical/political magazine) to highlight adverts auto-generated inappropriately to accompany articles on news etc. websites.



Of all the suggestions, I like "Malgorithms" the most. It is catchy, easy to read, and probably understandable by normal people as well.

Yeah, it's definitely the most sellable one I've seen.

I don't like the implication that it's the algorithm that's malicious rather than the person who wrote the algorithm (no algorithm is inherently malevolent or benevolent in my opinion, it's just an algorithm), but I also know this distinction is completely pointless for the vast majority of people and "malgorithm" gets the gist of what people are getting at across very well.

While "malevolent"/"malicious" definitely has a "wicked" connotation, you also see it in words like "maladapted" or "malodorous" which are just "bad" without the "wickedness".

That's still dumping a "bad" on the poor old algorithm though, which has done nothing wrong simply by existing and doing what it is programmed to do. Algorithms aren't bad, it's the programmers who write them and the managers who decide they should be written who are bad when things like this happen.

I don't see how you could believe this point, unless you think things like "There's no bad music, only bad musicians" as well.

Perhaps an elucidating counterpoint is an algorithm written in such a manner that it is deliberately worthless, as a joke (e.g., StackSort, Bogosort). Obviously those aren't the result of a bad programmer; they're just inherently bad algorithms.
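For instance, bogosort can be sketched in a few lines of Python (illustrative only); it is a perfectly well-defined algorithm whose design, not its author, is the joke:

```python
import random

def bogosort(items):
    """Shuffle until sorted: a deliberately worthless algorithm.

    Still an algorithm (precisely defined, terminates with
    probability 1), but the expected runtime is on the order of
    n * n! shuffles.
    """
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # prints [1, 2, 3], eventually
```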

>unless you think things like "There's no bad music, only bad musicians" as well.

I actually do, and I say that as a musician myself. These things are inherently subjective; writing good music isn't just a matter of how closely the musician adheres to a predetermined set of rules. The qualities of goodness and badness exist in the minds of the creators and the audience rather than being attached to the music itself in some sense. Any attempt to classify "good" versus "bad" music in the sense people usually understand it is just an appeal to authority; the only thing that makes music good is "do I personally enjoy listening to it or not?" You can try to classify music by how closely it fits a genre's rules, but this quickly breaks down into absurdity in practice (for example, with acts like the Grateful Dead that span many genres).

Are there bad smells?

Not really, the fox crap that smells awful to me smells wonderful to a golden retriever. The "badness" of the smell is entirely down to the nose that's smelling it, the subjective experience of smelling comes from the mind rather than the particular chemical compounds which we understand as a smell.

That subjective information which describes the badness of a smell doesn't exist within the smell itself, it exists within the mind.

Interestingly, "mal" in French means "bad". In Latin, "malum" means "that which leads to something wrong".

So "Malgorithms" is an accurate one.

That's just it though. Part of the definition of "algorithm" is "correct" and these things are all just ML output that generate correlated noise.

Hmm? I don't think an algorithm has to give "correct" answers, it just has to be precisely defined. For example, one could say "For this problem, a greedy algorithm yields decent but suboptimal answers."
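A minimal sketch of that point in Python (the coin set is a made-up example): greedy coin change is a precisely defined algorithm, yet for some denominations it returns more coins than necessary.

```python
def greedy_change(amount, coins):
    """Make change by always taking the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

# With denominations {1, 3, 4}, greedy spends 3 coins on 6...
print(greedy_change(6, [1, 3, 4]))  # [4, 1, 1]
# ...while the optimal answer, [3, 3], needs only 2.
```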

Merriam-Webster online says: "a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation" and "broadly : a step-by-step procedure for solving a problem or accomplishing some end".

Algorithms solve problems. Wrong answers are not solutions to (i.e. do not solve) a given problem. Hence, algorithm implies that it provides correct answers within the parameters of the problem.

> Hence, algorithm implies that it provides correct answers within the parameters of the problem.

And so they always do. The problem that an algorithm solves is defined by the algorithm itself. The purpose of a system is what it does[0].

Algorithms are useful to solve problems for us - the trick is always in making sure that the problem the algorithm solves is the one we want.


[0] - https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

That's a bingo.

Disagree. We see this phenomenon all the time: right solution, bad input data; right solution, wrong problem; worlds turned to grey goo by replicators working to some technically correct algorithm.

These will never give the right solution, only something close.

The definition of algorithm is wider than you think. As your parent poster noted, "greedy algorithms" exist (as do many other algorithms which provide suboptimal answers). You can easily verify this by googling.

I love "malgorithms", having been exposed to the malicious grift of the healthcare industry.

If you coined this it's great!

The Scunthorpe problem [1] is used to describe false positives from auto filters, which are often the result of naive substring matching. In a way, the current problem is similar, but at the semantic level.

However, it doesn't cover the other AI mistakes you mentioned, like self-driving.

[1]: https://en.wikipedia.org/wiki/Scunthorpe_problem
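The failure mode is easy to reproduce; a toy blocklist filter in Python (word list invented for illustration) flags the town name because a slur happens to appear inside it:

```python
# Naive substring matching: the classic Scunthorpe failure mode.
BANNED_SUBSTRINGS = ["cunt", "tit"]

def is_flagged(text):
    """Flag text if any banned string appears anywhere,
    even inside a perfectly innocent word."""
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED_SUBSTRINGS)

print(is_flagged("Scunthorpe United"))   # True  -- false positive
print(is_flagged("constitution"))        # True  -- false positive
print(is_flagged("queen takes knight"))  # False
```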

The problem also goes in the other direction: platforms rely so much on automation that the human signal gets too faint. For example, humans have a hard time flagging actual hate speech on these platforms as well. Another example: every so often there is a front-page post on HN where a company like Google automatically shuts down service for a customer (a false positive). The customer has a hard time getting through and having the false positive corrected, because their signal can't reach through the layers of automation.

The concept of "so-so automation" [1] seems relevant: innovation that allows a business or organization to eliminate human employees, but doesn't result in overall productivity gains or cost savings for society that could then be redistributed to the laid-off employees.

I think so-so automation is often used in places where there's a lot of zero-sum conflict between workers and management, or where the work itself causes a lot of negative human externalities. (This can be a good thing: it's probably okay to settle for a "worse result" from an automated system if it eliminates a lot of physical or psychological harm to people... some content moderation issues probably fall under this case, but not this one.)

[1] https://mitsloan.mit.edu/ideas-made-to-matter/lure-so-so-tec...

I’d call it “plausible automation”.

It enabled things like Facebook to displace message boards for the most part. Look at Facebook or Reddit: they are barely able to police obviously noxious behavior in English, and military juntas in Myanmar are able to organize on the platforms.

As a non-native English speaker I read "so-so automation" like "sus-automation" and found it oddly appropriate.

I figured it would be appropriate to ask an AI to come up with a label. So I prompted GPT-J with:

AI has proven to be terrible at moderating human content, we felt we needed a word for these kinds of buggy systems and have chosen: "

And GPT-J answered:


Which seems to capture the spirit of unreliable AIs pretty well.

"Totalitalgorithms" (Totalgorithms?) captures the spirit of these algorithms. They seem like bugs but they're actually undirected, organic features of a total technocratic political system that is rapidly coming to dominate life in our modern societies. The filters will be tuned but not fixed because they aren't broken. They're part of what Tocqueville described as 'soft despotism':

"After having thus successively taken each member of the community in its powerful grasp and fashioned him at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd."


Algorithmic totalitarianism resonates strongly.

It's not a human rights violation if it isn't a human doing it!

The human's rights were still violated even if it was a machine that did it.

> It's becoming an increasingly dystopic force in our lives and will likely get much worse before getting even worse. So it needs a label.

It's not just something that happens to us, it's something we do. We need to do better rather than accepting it as inevitable, and let future generations worry about what the best name for it was.

> Maybe there's a ten syllable German word that expresses it perfectly?

That being said, I propose Urteilsfähigkeitsauslagerungsnekrose: The necrosis that follows the outsourcing of our capability of judgement.

11 syllables, almost as requested ;)

May I suggest "yacht-feet automation"? It focuses on increasing Zuckerberg's yacht feet, and doesn't do anything else, right?

Jachtverlängerungsautomatisierung ;-)

Also 11 syllables.

Man, German can be such a nice, beautiful language for stuff like that!

"It's not a feature, it's just another yacht foot". I like it.

> Urteilsfähigkeitsauslagerungsnekrose

Post-reason-delegation necrosis?

But English is always a bit awkward for this sort of thing.

It's easy to blame this on imperfect technology, but I'm not so sure. A couple of months back, when all the tech companies started their holier-than-thou publicity campaigns with token actions, we faced the same issue.

"Blacklist" was banned as a term because it was deemed racist, no matter that people understand black and white outside the race issue. So if "blacklist" is deemed racist by people, not technology, thus removing context from the equation, why is the AI wrong to assume that "attack the black soldier in C4" is hate speech?

It's why feature extraction/stat bucket AI sucks.

Human interactions have a social/cultural context. You can't just recognise $thing, you have to recognise how $thing depends on $context for the correct interpretation.

Current AI either ignores context completely or doesn't parse it correctly.

It's a rediscovery of the ancient "Time flies like an arrow, fruit flies like a banana" problem.

If you're out on a date and you say "Let's go back to mine" it implies one thing. If you say it to some friends after an evening out it usually means something completely different.

Sometimes it means the same thing - but you need to know a lot about the people involved to be able to infer that accurately.

And sometimes humans can't parse these nuances accurately either.

AI-by-grep or stat bucket can't handle them at all, because the inferences are contextual and specific to situations and/or individuals. They can't be extracted from just the words themselves.

Minsky & co researched some of this in the 70s, and eventually it motivated the semantic web people. But it was too hard a problem for the technology of the time. Now it seems somewhat forgotten.
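A caricature of that "AI-by-grep" approach in Python (the word list and scoring are invented for illustration) shows why innocent chess commentary trips the wire:

```python
# Context-free keyword scoring: counts "hot" words with no idea
# whether the domain is a street fight or a chess board.
HOT_WORDS = {"attack", "black", "white", "threat", "kill"}

def toxicity_score(text):
    """Score text purely on keyword hits, ignoring all context."""
    words = set(text.lower().replace(",", " ").split())
    return len(words & HOT_WORDS)

chess = "White attacks the black knight, a real threat on f6"
print(toxicity_score(chess))  # 3 -- flagged, despite zero hate speech
```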

I've been using the phrase "K ohne I" for years already, which basically means "künstlich ohne Intelligenz" ("artificial without intelligence"). We all saw this coming; the topic has been gone over in sci-fi literature. And still, big tech decided it's time to roll it out. "A human wouldn't be perfect either, and we claim this algo is better than the avg. human" is the last thing you hear before discriminating tech is rolled out. And since politics is in the grip of commerce, regulation will not happen early enough. We are fucked. 2040 will be horrible.

I don't have a word for the phenomenon, but the problem reminds me of a quote by Wilfrid Sellars.

"The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term"

Call it philosophy-lacking, maybe, or worldview-lacking, but understanding how things 'hang together' in broad terms is precisely what our 'intelligent' systems cannot do. Agents in the world have an integrated point of view; they're not assemblages of models, and there seems to be very little interest in building anything but the latter.

It's basically perception and reaction without cognition.

In the natural world we would call that instinct. So maybe 'artificial instinct' if we want to keep 'AI' or 'synthetic instinct' because I think it sounds better.

It is basically about using the wrong tool for the job, and continuing to do so even after you (and everyone else) are fully aware of just how wrong you are. Personifying the tool would just distract from the root cause.

"I was using a ballpeen hammer to pound in roofing nails, everything was going great until the hammer's 'synthetic instinct' resulted in a painful blow to my groin and a near fatal three story drop. It'll go better next time - I've painted the ballpeen hammer a different color."

If a human made the same mistake, we would call them incompetent, careless, and negligent.

- incompetent system

- incompetent robot

- incompesys

- incompebot

- inept system

- inept robot

- inepsys

- ineptobot

- inepobot

- bunglebot

- hambot

- sloppybot

- careless system

- careless robot

- carelessys

- carelessbot

- neglisys

- negligent robot

- neglibot

I like 'neglibot'. "YouTube neglibotted my video." "This is my new email address. My old one got neglibotted." "The app works for typing in data, but the camera feature is neglibotty." "They spent a lot of effort and turned their neglibot automod into carefulbot. In another year it may be meticubot."

Slightly related new word: "copiloting" - copying by AI (copy + looting), e.g. GitHub copiloted opensource code.

Then why not copylooting...

Neglibot is nice. Dumbot works too...

IMHO, it's a case when a bug in an Artificial Intelligence system produces the Artificial Idiot, so "AIdiot", or "AdIot"

Perhaps Artificial Stupid System, you do the acronym yourself...

I love Dumbot. Has a beautiful ring to it. Can definitely see it catching on

Dumbot works with the Carlin rule of improving the word by putting fucking in the middle: dumfuckingbot

Artificial Idiocy

Artificial Incompetence

sounds like “artificial intelligence” but really captures the fact that it’s just bad to a harmful extent at what it’s supposedly trying to do.

I like this. It's one tweet/article headline away from going completely viral.

Industrial revolutions have happened a few times in the past, and every time one occurs, we change our world to adapt to it.

I can't help but wonder what the world would look like if services in the future were provided primarily by AIs. How do we adapt to them? Do we have to invent a "New Speech" just to make AI understand us better so we can live an easier life?

"Question: Have you consumed your food today?", "Answer: I have consumed my food today."

Or a more subtle example:

"Hi! Welcome to ___. What do you want to eat today?", "I want to eat item one, fifty-six, eighty-seven and ninety-one", "Do you want to eat item one, fifty-six, eighty-seven and ninety-one?", "Yes", "Please make a seat. The food will come right away!" (all without making any Hmm or Err noise)

> "Hi! Welcome to ___. What do you want to eat today?", "I want to eat item one, fifty-six, eighty-seven and ninety-one", "Do you want to eat item one, fifty-six, eighty-seven and ninety-one?", "Yes", "Please make a seat. The food will come right away!" (all without making any Hmm or Err noise)

this already happens today, with human servers. The menu items are numbered, and you tell them the number instead of the name of the dish.

And then in some places they hand you another number - a pooled session identifier - to take to your table. Then you are expected to respond to events broadcast regarding this number if the food fails to reach the destination automatically.

Blockchain Chicken Farm gives an interesting angle on this. Part of the book describes an AI-controlled pig farm. For the outcome to be good, as many variables as possible have to be removed from the pigs' lives, for example total isolation from the outside world; otherwise there is too much for the AI to account for, and the training set also needs to grow. What does that mean for our lives as AI gets control over more aspects of them? What variables can be removed?

We're going to resurrect the transatlantic accent.

I've already started adding a hard 'ng' to 'going to' because my phone keeps abbreviating it to 'gonna'.

That would actually be cool also for non-native English speakers :)

Yeah that's probably not goinv to work for India and the rest of the anglosphere.

Yea, I'm Trinidadian, and even though my accent has changed significantly from living in Canada for 7 years, people and especially voice recognition get confused by some of my speech patterns.

An example is that people always hear 50 when I say 30, because of how I pronounce the "y". Anything ending in "th" or "ing" gets confused a lot by people who don't know me.

It's kinda funny to keep track of :)

> goinv to

Did you succumb to the exact same problem GP comment described?

A new vernacular to interface with tools that never reveal the actual state of a system under their control, and forbid you to directly influence it. You are granted the privilege to express your limited desires, from which the system will "learn" your preferences.

If you want a vision of the future, imagine a man repeating "Hey Thermostat! Can you cool it down in here a little?" over and over–forever.

> Do we have to invent a "New Speech" just to make AI understand us better so we can live a easier life?

It's already happened. Have you ever seen a grandparent unfamiliar with new tech talk to Alexa? There are a ton of them on YouTube.

In a government system, a similar problem is called bureaucracy. It is similar in the sense that the system is very complex, beyond any single person's comprehension; the bureaucratic system is unforgiving in its conclusions; and it is the responsibility of the victim to deal with a false positive using the same (or a similarly complex) system to attempt a correction.

However this is different from bureaucracy in the level of automation and statistical inference. A bureaucratic system doesn’t do inference (or at least that is not its main function), and the steps between require human inputs (albeit really automated humans most of the time).

I suggest automatacracy which strings together automation and bureaucracy.

Why not algocracy or robocracy?

Computer says No!


Composite noun of verschlimmbessern [1] and Automatisierung (automation).

[1] https://en.wiktionary.org/wiki/verschlimmbessern#German

How about calling it a "Buttle" after the movie Brazil from 1985 where a certain Mr. "Buttle" gets arrested and killed instead of a Mr. "Tuttle" due to a fly in a teleprinter.

Nowhere near as popular as the skit with the vikings.

Artificial Intelligibility.

We are often reduced to mere conformity to what the artificial intelligence can make sense of.

Captcha: "Are you human?"

Human: <Goes on to do a simple perceptual task even a cat could do if they had fingers>

Superficial Intelligence.

It's funny that you should mention War Games, because the only way to win this battle is not to play at all. Why are we so hell-bent on restricting speech and burning all these engineering hours trying to moderate something that cannot be moderated? Languages -- and people -- are "transformable" enough to avoid triggering "hate speech" (whatever that actually is, and whoever it is that determines it) algorithms. Let people downvote or shut their computer off if they don't like it, and leave it at that. Are we that scared of words or ideas?

I used to live in a communist country (probably the same one that Mr. Radic lives/lived in), where "hate speech" -- which was called anti-state, or anti-establishment speech back then -- was punishable by a middle-of-the-night visit by dark-clad police. Yet, people still found ways to openly criticize the Party, and there were even popular songs that openly defied them, which only demonstrates that not even human pattern recognition is good enough to detect these things, as you state in your first sentence.

It's a waste of time, but more importantly it is detrimental to society.

I am scared of the masses believing nonsense. What fraction of Americans believe the election was stolen? How many people are still going to refuse to be vaccinated? Indeed, bad ideas are scary. I am scared that my countrymen will lead another insurrection or allow themselves to be a needlessly vulnerable vector for a highly transmissible and dangerous virus.

Misinformation spreads like wildfire.

The notion that good speech counters bad speech relies on rational, well informed, critical thinking skills that are lacking in a significant portion of the populace.

It is fundamentally different for a company to exercise its right to free speech in choosing what is said on its platform than for a government to use free speech as a fig-leaf cover for suppressing dissent.

Putting up no fight against misinformation and hate speech is not a winning strategy. Society loses much more from nonsense spreading than it does from having to wait 24 hours for high quality chess content to get remoderated.

It’s not just about a game of chess being moderated.

It’s about silencing legitimate speech that you disagree with, whether it’s on a factual level or other motivations.

Perfect example, coronavirus being manufactured in a lab rather than originating in the wild. Anyone that suggested it was from a lab has been moderated heavily until a report came out that in fact it did.

Was this not damaging to society?

And to your point of hate speech, can you give me a definition of the term? You can’t because there isn’t one that a) covers all scenarios and, more importantly, b) doesn’t cover legitimate speech.

It is up to me to think for myself, not up to someone else to do it for me.

> Why are we so hell-bent on restricting speech and burning all these engineering hours trying to moderate something that cannot be moderated?

Because if you allow humans to moderate then those humans start acquiring power. Which is something FAANG companies strive to prevent at all costs.

So instead put power in the hands of giant companies because regular people can’t be bothered to not get hot-and-bothered by reading something they don’t like? What makes these companies so morally pure?

The only way this can make sense - especially on HN - is if the people who advocate for company control of regular peoples’ lives work at these companies.

We definitely need a term for this so that when we are a victim of it, we can easily raise a flag. I have a few ideas:

- Bot blunder

- Artificial stupidity

- Algofail

- Machine madness

- Neural slip

Artificial Stupidity stands out to me. Nice one!


My first thought was we should have the Germans create this new word.

How about "inhuman-ops" ?

I think this captures the unnatural, unjust, and just basically wrong quality of using AI to stand in for humans.

"human-ops" is the justification for removing human pattern recognition because it's better for a small group of employees (e.g. "we can't let our expensive staff view flagged distressing pictures or boring chess videos, so we will get an algorithm instead"). The tech companies' HR departments say that this is pro mental health, as an additional way to justify the change and any unemployment.

'starbot', from 'star chamber' [0].

"In modern times, legal or administrative bodies with strict, arbitrary rulings, no "due process" rights to those accused, and secretive proceedings are sometimes called "star chambers" as a metaphor."

Example uses:

"I got starbotted."

"Instagram's automod is a starbot."

"YouTube is too starbotty for your lectures. Better post with your school account."

"We're suing them because their starbot took down our site right after our superbowl ad ran."

"Play Store starbotted release 14 so we cut a dupe release 15. How much will it cost to push the ad campaign back 2 weeks?"

"We use Gmail and Google Docs but not Google Cloud because of the starbots."

"I tried to put Google ads on it, but their starbot rejected the site because it doesn't have enough pages. It's a single-page JavaScript utility." (This is my true story about https://www.cloudping.info )

"Our site gets a lot of traffic but we don't use Google ads because of the starbot risk. Nobody needs that trouble."

"The STAHP Bill (Starbot Tempering by Adding Humans to Process) just passed the Senate! Big Tech is finally getting de-Kafka'ed. About f**ing time."

[0] https://en.wikipedia.org/wiki/Star_Chamber

In economics there is Goodhart's law.

'Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.'


The suggested “malgorithms” is probably the best noun form for these algorithms themselves.

As for terminology that captures society's general over-reliance on automation and algorithms to handle things that really ought to have direct human intervention, I like Rouvroy's "algorithmic governmentality".

Have you heard of the "expert beginner"? Maybe something like that.


AS, or just artificial stupidity, is something I've heard a couple of times. It's quite mind-boggling if you think about how many people had to engineer tensors and train networks for months if not years to create a system capable of such blatant stupidity.

Artificial Intelligence

In my opinion this is not just about AI. This is more general. We as humans try to fix social issues with technical measures. All the racists are not going to suddenly become good people if we push them to separate platforms.

Yes but at least you won’t be labeled as conspiracy spreading platforms by the /very important/ media, and your woke employees won’t walk out and disappear.

Oh, and the current administration won't pass legislation restricting your platforms or revoking Section 230.

sorry, are you railing against freedom of association? because it sounds like you are. if you're an insufferable ass (hypothetically speaking) and no one wants to work for you, and so they do not, thats freedom baby.

AI ~ Attempted intelligence

10 might be inelegant, or not...


/ex-germanist :-)

Artifice Intelligence

Artificial stupidity.

Artful Stupidity might go down better.

There's an acronym: OOD - out of distribution - for these situations.

There's no reason YT can't distinguish chess from hate speech if they updated their training set. Maybe they weren't aware of this failure case, or they haven't gotten around to fixing it, or trying to fix it caused more false negatives. The way they assign cost to a false positive vs. a false negative also plays a role.

Deep Neural Indifference - DNNs without emotional weights to guide behaviour and interactions. Leads to impaired empathy and lack of remorse.


Context-Free Judgement

"Cheap AI"/"cheap automation", to dispel the notion that throwing data at a neural network is high science or serious engineering. Or, even more directly, reduce it to "fuzzy matching". AI is just a fuzzy pattern database.

The flagging itself wouldn't be such a problem if it weren't for the fear that you can never get an actual human being at Google to get in touch with you.

There's The Scunthorpe Problem, which seems like a specific instance.

False positive?

This is the term for it, but probably not sexy enough compared with all the other suggestions.

I would call it an "algorithmic oopsies" for the inevitable non-apology when the situation appears on the front page of a major news outlet.

> Ooopsie woopsie! Our AI made a fucky wucky and locked your account for no reason lol. We pwomise nothing pls go away and die uwu PS: If you make a new account we gonna shadowban iwt immediately lmao, if you have any compwaints please write a letter and addwess it to the hospital you were born in.

A-I-opia. Think like myopia or hyperopia.

There is such a term as 'human error'. Thus, suggestions:

Machine error; Machine learning error; AI error; AI fail.

The same phenomenon happens without computers when bureaucratic rules are applied senselessly.

Artificial Due Diligence

Artifactual Intelligence?

Actual Idiocy

Anti intelligence?

Artificial Stupidity?

The word is “False Positive”


AI: Artificial Ignorance

Sloppy Intelligence

Artificial Unintelligence? Artificial Stupidity?


Artificial Stupidity



I think it's just called machine learning. They're all gonna run into awkward edge cases here and there.

Tbh, so do people. Somebody who's not familiar with chess or board games could think the same thing.

that makes it okay then? ignorance is strength!

It's not like humans are generally better than this. I mean, look at the GitHub master branch fiasco. It had a completely different meaning than "master" with slavery connotations, yet the outrage was so large that GitHub changed the name. I'd say this is the same behavior as this algorithm's: seeing a word, getting "triggered", and marking it as toxic even though it has a completely different meaning.

AFAIK there was no outrage, GitHub started it.

A receiver operating characteristic curve (ROC curve) [1] describes the tradeoff between sensitivity and specificity. No matter how sophisticated we think our classifiers are, the confluence of physics and mathematics will always limit the accuracy of our automated systems. It is just a matter of which kinds of errors we are willing to tolerate.

[1] https://en.wikipedia.org/wiki/Receiver_operating_characteris...
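A small Python sketch of that tradeoff (scores and labels are hypothetical): sweeping the decision threshold trades false positives against false negatives, and no single threshold eliminates both.

```python
def roc_point(scores, labels, threshold):
    """Return (TPR, FPR) at one threshold: sensitivity vs. 1 - specificity."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg

scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]  # hypothetical classifier outputs
labels = [0,   0,   1,    1,   1,   0]    # ground truth

# Raising the threshold lowers the false-positive rate, but only by
# also lowering the true-positive rate.
for t in (0.3, 0.5, 0.75):
    tpr, fpr = roc_point(scores, labels, t)
    print(f"threshold={t}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```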

No. What we need is something that is amazingly simple:

* you, a programmer who wrote something that caused company X to lose Y dollars because of a "mistake", have to be on the hook for Y dollars.

* you, a manager who managed a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.

* you, a product owner who accepted the work of the manager of a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.

* you, the CEO of the company that had a product owner who accepted the work of the manager of a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.

Yes, I understand that the cost of a mistake is now higher than the loss suffered by X. That's the incentive to ensure that it does not happen: the wives or husbands or partners of the people who would now have to pay are going to make sure they do not take those wacky ideas and implement them. Wacky ideas are abstract, but the loss of nice housing, nice organic food, nice daycare for the kids and a nice scholarship fund is real.

Start punishing mistakes and you will soon have a hard time finding good people to employ. Only bad (and broke) ones will be available.

Funny, we hold civil engineers accountable and there seem to be plenty of capable engineers to do things that need to be done, and we hold bad pilots accountable, and yet the only reason airlines have a pilot shortage is because they refuse to pay more than $20k a year.

Developers need to stop being so allergic to accountability. It's kind of pathetic.

I studied mechanical engineering myself, so I hear you.

That being said, software development and civil engineering are very different in terms of risk management. There are strict regulations around what you do, and if you play by the rules, you have minimal to no risk. Even if dozens of people die under a collapsing building, you are not accountable if everything you did was by the book.

Software development, on the other hand, is more like the Wild West. There are minimal regulations, only best practices. One developer can never know whether the end product is free of errors.

Pilots are a completely different story. The OP was talking about "causing company X to lose Y dollars", not human lives.

> Start punishing mistakes and you will soon have a hard time finding good people to employ. Only bad (and broke) ones will be available.

Nah, the opposite. Being paid $800k/year is a great carrot. It should come with a stick so fewer people coast on cruise control.

Some companies actually try to do that. They are mostly not the best ones, and tend to have liquidity problems and incompetent management.

You would have to pay CEO salaries since the responsibility lies on developers.

Since you like to invoke a climate of fear, why not add the death sentence to that list?

But ... would you then want to work in that field, if you feared every step you made could be your last?

Do you think there would be much innovation, if the punishment costs are so out of balance?

You forgot the board and the shareholders.

What about the customers who bought/consumed that faulty product? They should also pay Y dollars, since it was their decision and responsibility to take the risk of buying the product and not vetting management, developers and the CEO before hand.

I hope this is a joke.

Well. The speech police started changing "blacklist" and "whitelist" in programming contexts, even though those terms had no racist history; maybe it's time to change it in chess too. (After all, white always goes first; that is not very PC.)

Rename "black" and "white" to "second player" and "first player".

We should recolor the pieces with neutral colors such as #d3e6fa (because green and yellow have connotations).

And remove the obviously sexist Queen (why stronger than King?) and King (why should it decide when the game is over?).

Rook is shady as well.

And so on. As Aristotle said: "once we look close enough, we will find someone offended".

> We should recolor the pieces with neutral colors

If only! Clearly we need different chess skin colors so that everyone can identify with their chess pieces. I think five skin tones [0] should cover everyone.

[0] https://emojipedia.org/emoji-modifier-sequence/

need different genders for the pieces too.

What connotation does green have? Where can I find the land of green people?

Greenhorns are the new and inexperienced.

See also: green usernames here on HN.

Note: I'm serious and not serious at all here at the same time.

No: the colors don't matter at all. Yes: if someone wants to get offended, they will find a way, even in how they are left out because no one makes jokes about them.

Exactly this. I am tired of fake official correctness, and of people who revert to racist/homophobic talk when they are off the record.

I am French and we have a history (and culture) of parody, and of the freedom both to do it and to be offended by it. This was the case when I was a kid and up until some years ago.

Now criticizing or making fun of some groups is *dangerous*: you cannot publish a picture of the prophet of Islam without literally putting your life in danger. Yes, I know that this is a small minority yada yada yada, but I have only one head.

At the same time, you can make fun of Catholics. They are not happy, but also somehow "protected" by the culture and history of France. This is similar to Jews, though they have become more easily offended recently.

The worst is that the people who will officially proclaim how equal everyone is will say in private that someone is a "tapette" (a derogatory French expression for a gay man; I do not know the English equivalent), or that Muslims are medieval savages, or that Blacks are dumb (we had a few cases on TV where the guest thought they were off the air when they were not).

I do not know what the solution is. It certainly does not help the normal members of the communities above that people are afraid to put them in the same bag as Catholics (in France): they would not be officially immune from criticism or mockery, but they would be perceived as "people like us".

That’s not tied to a specific race, though? Sometimes being ‘green’ is an advantage, because you are able to ask inexperienced questions and are given more slack when new to a job, etc.

First and second implies a hierarchy and uneven power balance. Both players should have their turns simultaneously.

I thought of this and realized it'd actually be a fun game. Both players write down their next move and then move simultaneously once both are ready. To avoid race conditions, if a player moves a piece that is to be taken by the opponent's move, then that player has saved that piece with that move. That creates an interesting dynamic where you may not wish to take the most valuable piece if you predict the other player will move out of the way.
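The "fleeing saves the piece" rule sketched above is easy to pin down in code. The representation below (squares as strings, pieces as labels) is invented for illustration; it is not any chess library's API, and a real ruleset would still need an explicit tie-break for two pieces landing on the same square.

```python
# A toy resolution step for the simultaneous-move chess variant:
# both moves apply at once, and a piece that moves this turn
# cannot be captured, so "fleeing" a threatened square saves it.

def resolve_turn(board, white_move, black_move):
    """board: dict mapping square -> piece label. Each move is (src, dst).
    Returns the new board and a list of captured pieces."""
    w_src, w_dst = white_move
    b_src, b_dst = black_move
    new_board = dict(board)
    w_piece = new_board.pop(w_src)
    b_piece = new_board.pop(b_src)
    captured = []
    for piece, dst in ((w_piece, w_dst), (b_piece, b_dst)):
        if dst in new_board:  # only a piece that stayed put is captured
            captured.append(new_board[dst])
        new_board[dst] = piece
    return new_board, captured

# Knight jumps to e5 while the pawn there flees to e4: the pawn is saved.
board = {"c4": "wN", "e5": "bp", "d7": "bQ"}
after, taken = resolve_turn(board, ("c4", "e5"), ("e5", "e4"))
print(taken)  # []

# If black moves the queen instead and the pawn stays put, it is captured.
after, taken = resolve_turn(board, ("c4", "e5"), ("d7", "d5"))
print(taken)  # ['bp']
```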

I used to play a real-time version of this called Kung-Fu Chess. Each piece takes time to move to its destination, and when it arrives, a timer ticks down before that specific piece can be moved again. Very fun game.

https://en.wikipedia.org/wiki/Kung-Fu_Chess https://www.youtube.com/watch?v=fVob7meb83w

"To avoid race conditions", indeed!

> "To avoid race conditions", indeed!

Ha! True enough, "a good pun is its reword" ;).

How would two pieces moving to the same square be resolved? Both lost, or ... ? Regardless of the specifics, it would create a very complicated (but potentially interesting) dynamic when moving any valuable piece within a contested region.

You gamble with how many points you have left. Let's say each player starts with 40 points (most games take about 40 moves). You spend anywhere from 0 to however many points you have left to move a piece. If it's early on and there's no chance of conflict, bet 0. If it's an important piece in a crowd, spend more. If there's a conflict, whoever spent more wins that spot.

It might add an element of subterfuge to the game.
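The bidding rule above is simple enough to sketch. The budget of 40 points matches the "most games take about 40 moves" estimate; everything else (names, the tie outcome) is invented for illustration.

```python
# Sketch of point-bidding for contested squares: each player starts
# with a finite budget and secretly stakes points on a move; on a
# collision, the higher bid wins the square.

def resolve_collision(bid_white, bid_black):
    """Return which player's piece takes the contested square."""
    if bid_white > bid_black:
        return "white"
    if bid_black > bid_white:
        return "black"
    return "tie"  # the rule as stated leaves ties open

class Budget:
    def __init__(self, points=40):  # ~40 moves in a typical game
        self.points = points

    def stake(self, amount):
        if not 0 <= amount <= self.points:
            raise ValueError("bid must be between 0 and remaining points")
        self.points -= amount
        return amount

white, black = Budget(), Budget()
# Early game, no conflict expected: both bid 0.
print(resolve_collision(white.stake(0), black.stake(0)))  # tie
# A contested central square: white risks more points.
print(resolve_collision(white.stake(5), black.stake(3)))  # white
print(white.points, black.points)  # 35 37
```

Because the budget is finite, overbidding early leaves you unable to protect important pieces later, which is exactly the gambling element being proposed.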

> How would two pieces moving to the same square be resolved?

I'd say that the one that has traveled the longest distance wins, and captures the piece that started closer.

(My implicit mental model is that both pieces "depart" from their home squares at the same time and travel at a constant rate, so the closer piece arrives at an empty square, while the more distant piece arrives second and captures the first-arriving piece. This may also encourage more dynamic play)

Maybe highest value price wins, with ties losing both pieces.

My son and I would play chess this way, with my wife actually moving the pieces. We use a die or a piece ranking to resolve turns ending on the same square, depending on the actual game we’re playing.

Diplomacy works like this.

They are uneven; having the first move is a documented advantage called the "initiative". Your first move as Black depends on the first move White made; typically you won't have the same response to 1. d4 and 1. e4!

Anyway, I've been looking forward to this day, when SJWs start trying to change chess because it's "racist" (White has the advantage of the initiative) and "sexist" (it makes you sacrifice the queen to save the king). Looking forward to seeing that level of stupidity being reached!!

That’s weird; there’s a documented advantage to being the second player to move as well, known as the “defender’s advantage”, since you have more information when making your move: you know both the board state and the other player’s move, while the first player only knows the board state. No idea where the idea of a first-mover advantage came from.

Statistically in chess however there seems to be an advantage for the first mover.

You can be #1 and I'll be #A.

That sounds like segregation. Putting different players in to different sets.

Yeah, or the Mongooses, that's a good team name. "The Fighting Mongooses."

Be wise, randomize!

Why not try to be more inclusive and have each piece with its own color?

"Actually this rook identifies as bishop"

You didn't capture my king; it now identifies as a queen. Please stop dead-naming my queen. Funny enough, chess already has a rule that allows pawns to become queens. Touché.

Relatedly, the world champion played a game where the player with the black pieces went first to make a statement about racism [1].

[1]: https://www.chess.com/news/view/moveforequality-carlsen-giri...

Understandably, it was difficult to adapt to the change: "It is difficult to change your mindset in a chess game with a different start. But if we can change our minds in the game, we can surely help people change their minds in real life."

I'm sure chess board manufacturers would love this. Think of how many new sets they could sell if everybody decided they had to replace their old black and white sets.

Eh, many wooden sets are light and dark woods. Also many checkers boards are red and black.

It’s still ridiculous, like Cultural Revolution-level ridiculous.

At least nature will never change and nights will be black and days will be light.

> Eh, many wooden sets are light and dark woods.

Ah, but one could argue that the colours of light and dark wood are closer to the skin colour of humans culturally classified as "white" and "black" than are the actual colours white and black!

Until “they” force you to have eye implants that correct for this natural racism

"The year was 2081, and everybody was finally equal. They weren’t only equal before God and the law. They were equal every which way. Nobody was smarter than anybody else. Nobody was better looking than anybody else. Nobody was stronger or quicker than anybody else. All this equality was due to the 211th, 212th, and 213th Amendments to the Constitution, and to the unceasing vigilance of agents of the United States Handicapper General."

"In the year 2081, the 211th, 212th, and 213th amendments to the Constitution dictate that all Americans are fully equal and not allowed to be smarter, better-looking, or more physically able than anyone else. The Handicapper General's agents enforce the equality laws, forcing citizens to wear "handicaps": masks for those who are too beautiful, loud radios that disrupt thoughts inside the ears of intelligent people, and heavy weights for the strong or athletic."

Sounds like "The Giver":

> The Community lacks any color, memory, climate, or terrain, all in an effort to preserve structure, order, and a true sense of equality beyond personal individuality.


As far as boards go, most tournaments in the US have long been played on green and white boards. Here are a couple of common boards you'll see for most games in most tournaments [1][2].

[1] https://www.chessusa.com/product/vinyl-chess-board.html

[2] https://www.chessusa.com/product/silicone-chess-board.html

In Shogi, aka "Japanese chess", the players are for some reason called Black and White in English, but in Japanese they are 先手 (sente, literally first hand/before hand) and 後手 (gote, literally second hand/after hand). Black moves first, BTW.

It would be easy to choose a narrative that casts the first player in the bad light as well.

Chess is a game of war, isn't it? So whoever goes first is the aggressor.

So if Black goes first, that can also play into certain racial biases people have.

Except there is actually no color difference in Shogi. Which player a piece belongs to is indicated by its orientation (at least that's my layman understanding). I have no clue why English goes with Black and White.

The speech police didn't do anything. I stopped using those terms because they're not descriptive.

This is false. At least two very large organizations (Microsoft and the Department of Defense) have decreed that their employees not use "sensitive" terms like "whitelist" and "blacklist".

How about just light and dark? A lot of chessboards don't have strictly black and white colored pieces anyway.

(For the record I think the whole thing is stupid)

"Light gray" is easy to misspell as "light gay", e.g. "light gay knight takes dark gay bishop". IMHO, it's better to use "wine" (w) and "blue" (b) as colors.

If you use English rather than American, you have "light grey" which is less ambiguous.

And change attack and defense to moving forward and backward. And I guess we should abolish winning and losing while we're at it.

Chess already has the terminology of light squares and dark squares, so adopting that for the pieces too would add confusion and awkwardness.

For example “black’s light squared bishop” would become “light’s light squared bishop”.

It would still be light-skinned pieces vs. dark-skinned pieces.

But the pieces would still be white and black. We just need to recolor them to a color that hasn't been claimed as part of an identity... (looks at rainbow flag) well, shoot...

Infrared-coating player and Ultraviolet-coating player? Chrome player vs. Brass player? No kidding, I like this last idea.

> Chrome player vs. Brass player?

Chrome Player: sponsored by Google.

Sorry, I can't stop from posting this joke!

The only logical solution is to make all the pieces the same color.

Change the colours too. Red and neon brown.

"Night" and "Day"

What a fine mess we've created.

- Popular experiences tend to be better experiences, so we all congregate to the same services

- Homogenous user behavior leads to monopolistic situations, increasing outrage when anything goes wrong

- Even if the government doesn't try to enforce moderation, the company attempts to self-moderate to maintain its image

- The popularity of the service makes human moderation impossible, creating a need for inevitably-flawed robots

I see no solution. The only way to win is not to play.

> The popularity of the service makes human moderation impossible, creating a need for inevitably-flawed robots

That's true, but we might be able to improve things with a bit more human moderation.

For instance facebook is insanely profitable. They could probably increase their staffing for moderation by a pretty decent multiple and still be very profitable.

So the current state of moderation is not strictly a matter of need, it's also a matter of greed in terms of Facebook wanting to automate away jobs they could pay people to do. And given the state of online discourse, it's a decision we're all paying for.

Then you have the problem of bias in human reviewers. I don't think you can solve the problem of censorship with just a few more hires.

In the past for a game-modding project we sort of opensourced reports. Users (meeting certain criteria) could visit a page and view 5 seconds of video and statistics of an alleged/detected cheater. Then select positive/negative/inconclusive. If (IIRC 5 users) had voted and the majority said positive/negative then a ban/unban would be issued. Because it was random reports and the usernames were hidden, there were no obvious bias.
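The decision step in that flow is essentially a fixed-quorum majority vote. The sketch below guesses at the shape of such a system (threshold, verdict labels); it is not the project's actual implementation.

```python
# Rough sketch of crowd-sourced report review: usernames hidden,
# a fixed number of reviewer verdicts collected, strict majority decides.

from collections import Counter

VOTES_NEEDED = 5  # matches the "IIRC 5 users" in the comment

def decide(verdicts):
    """verdicts: list of 'positive', 'negative', or 'inconclusive'.
    Returns 'ban', 'unban', or None while there is no majority yet."""
    if len(verdicts) < VOTES_NEEDED:
        return None  # keep collecting votes
    counts = Counter(verdicts)
    if counts["positive"] > len(verdicts) / 2:
        return "ban"
    if counts["negative"] > len(verdicts) / 2:
        return "unban"
    return None  # no majority: leave for a human to review

print(decide(["positive"] * 3 + ["negative", "inconclusive"]))        # ban
print(decide(["positive", "inconclusive"] + ["negative"] * 3))        # unban
print(decide(["positive", "positive"]))                               # None
```

Note that "inconclusive" votes count against both majorities, which biases the system toward inaction, arguably the right default for an automated ban pipeline.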

This works for obvious cases like cheating. Kinda like the legal system. This won't work for political speech and we knew this. Most legal systems are set up to avoid having courts judge political speech in most cases.

Even then it will probably only keep working until there's a rift in your community. Say people start arguing over trans rights (weirdly common on Discord), and then users get mass-reported and mass-voted to be banned by an activist minority.

Some communities seem to be immune to this. Mostly smaller ones of course. Some even use language that would immediately net you a twitter ban and they are still much friendlier than most popular hashtags.

It was always assumed that unnecessarily vulgar language leads to escalation, and in some cases that is certainly true. But the internet has produced some evidence that this isn't a general rule. On the contrary, ambitions to sanitize speech can make communities extremely toxic.

> Homogenous user behavior leads to monopolistic situations,

i.e., lots of people using a product that works for them.

There was a Star Trek channel on YouTube which got suspended because the host called the fictional race of Ferengi “greedy”, which they actually are. It got reinstated after a few days. But it’s getting ridiculous now.

I wonder how The Onion gets away with https://www.youtube.com/watch?v=Q4PC8Luqiws where they suggest visualizing your money related stress as a greedy hook-nosed race of creatures who want to grabble up all your money - and only hire their own kind.

That is really well done, Onion used to be amazing.

Ferengi looks the same as فرنگی if 'g' sounds as in 'girl'.


"A foreigner, especially a British or a white person." https://en.wiktionary.org/wiki/firangi

Huh, the Thai ฝรั่ง ("farang", meaning Guava but also used to refer to white people) is also kinda similar. Looks like it has similar etymology and is not just based on the color of the fruit flesh. [0]

[0] https://en.wiktionary.org/wiki/%E0%B8%9D%E0%B8%A3%E0%B8%B1%E...

My wife is a Ferengi.

It's never been confirmed that the language of chess is the reason the channel was flagged. It's all speculation. A fishing channel being taken offline due to hate speech, for example, is a boring story. The same thing happening to a chess channel is much juicier due to the implication that an AI accidentally flagged the words "black" and "white" as racist. There are a lot of reasons to be outraged by that idea, but it's important to remember that it may not have happened.

And then it blows up here because HN has a lot of people who are disproportionately upset about default branches being called "main" instead of "master".

The amount of slippery slope argument and strawmanning on HN when it comes to topics like this is pretty embarrassingly Daily Mail-ish.

The slippery slope has already happened; we are already at the point where words that had no bad connotations and didn't offend anyone are being changed. Continuing to change words like that just creates constant frustration and costs billions in lost hours changing and fixing old build systems. Of course people push back against that!

These are problems coming from single entities trying to preemptively predict outrage. There are no grassroots activists trying to change white/black in chess. There were no grassroots activists trying to change master/slave; it came from inside GitHub.

Language doesn't change unless the population/community changes it. I think the bigger story here is the misuse of AI where a human moderator would've made a better decision.

> Language doesn't change unless the population/community changes it

Which may or may not include coercion?

You don't think there is a slippery slope from banning one form of speech, to another?

Agadmator, the person who made the video in question, also made a video soon after explaining the situation, and gave some hypotheses on why the video got taken down. In addition to the reason being hate speech, he suggested it may have been because they discussed Covid-19, lockdowns, etc, and YouTube was attempting to stop the spread of misinformation.

The video I’m talking about is here: https://youtu.be/KSjrYWPxsG8

I'm actually starting to think we only see these stories about absurd censorship to make the more commonplace and pernicious stuff seem legitimate by comparison. "Oh, hahah, our totally legitimate censorship ML that uses language models to isolate people from each other based on predictions of patterns in their thinking made a funny goof! Gee whiz, you got us that time!"

Anyway, Google will be fine. Lots of tech companies have managed to re-brand after getting on board with idealists, just look at Hollerith.

A friend of mine was banned for sending a chat message that said “this is a Mexican standoff” — the people in the room were both Mexican, if it matters. We were all confused about why he was permanently banned.

I got flagged by FB for inciting violence when my sister said "Your wife is awesome, I'm going to steal her" and I replied "Haha, I'll fight you au".

At least the human moderator who handled my challenge of it was able to consider context.

Banned from what?

Because it implies the Mexican race can’t resolve disputes without violence thus normalizing xenophobia.

I think I'm more offended you referred to "the Mexican race". Mexican is not a race.

That was the attempt at sarcasm - I think it fell flat.

Perhaps a bit too close to what might be a real opinion. Doesn't hurt to add a /s in text where you can't use intonation and gestures to hint at sarcasm.

Did you miss the part where everyone involved was Mexican?

I was trying to mimic an overzealous hall monitor there and chose ignorance for effect. I don’t think it worked, as the rating and the flagging indicate.

I honestly wouldn't be surprised if someone makes the argument that this automated flagging is an indicator that chess's language is inadvertently racially charged. And think about the concept of "white goes first." All it takes is a few viral tweets, and suddenly the game of chess is in the crosshairs.

In my opinion, that might be a sign that the idea of drawing abstract connections between words and concepts, that are several layers of indirection apart, may be going too far.

Yeah using language that upsets people is bad, but if you allow enough layers between words and concepts, _everything_ can be argued to be offensive for one reason or another. Or will be soon once something else becomes a hot button issue.

> using language that upsets people is bad

Shouldn't be. Intent should be the grounds for upset, intent only. Otherwise you get a Euphemism Treadmill, and that's a goddamn fucking waste of time.

This would save so much wasted time. When someone says "Can you push that to the master branch?" there is nothing that ties that statement to slavery or racism, so there is nothing wrong with it.

The best thing to do is to ignore these people, they will never be happy so there's no point trying to please them.

If you ignore someone who is never happy then in time they may respect you, but if you always comply then they will never respect you and their perceived power over you simply increases, which means they feel empowered to ask for more and more.

But how will tech Cos show they're woke if they can't just change the names of things and say they did something important?

It's literally a bragging point at my new Co.

Anyone else intentionally making master branches in new repos at work?

It's hilarious watching the amount of mental energy people waste on this.

> ignore these people

Unfortunately for the rest of us, they've been put in charge.

I'd say they took charge, weaponizing empathy and recruiting useful idiots and powerful cowards.


> there is nothing that ties that statement to slavery

I would say there is: that's where the language comes from. BUT I do not think that necessarily means it is a callback or reference to enslaved _people_. The master/slave model accurately describes the setup in a way that lets people hear the terminology and understand what is happening without knowing the details. It doesn't condone or promote slavery, it has nothing to do with enslaving people, and in no way does it encourage such behavior.

Context matters, a lot. And to be honest I didn't hear anyone complaining about this (and I live in an extremely liberal place), so it came off (to me) as a grab for social currency (_especially_ since GitHub didn't use the term "slave"; GitHub was using "master" in the sense of "main" or "principal", so I didn't even understand the objection). If you try hard enough you can make anything reference race, but at the end of the day what really matters is context and how people perceive things. If no one (or, realistically, few people, because there are people looking to make issues) is making these connections AND no one is being harmed, then we shouldn't really be worried about it.

Edit: wanted to make clearer that I'm talking about two different usages of "master": one from the master/slave model [0] and the one GitHub uses (the adjective sense meaning "main" or "principal"), and that the GitHub version does not reference the former. I know we're talking about contentious subjects here, but I'm trying my best to convey what is in my head. I'm open to new opinions, but bear with me. Language is complicated.

[0] https://en.wikipedia.org/wiki/Master/slave_(technology)

> Master/slave model accurately describes the model in a way that people can hear the terminology and understand what is happening without knowing details.

Except it has absolutely nothing to do with slavery at all. The master branch is akin to the master record, which holds the true and complete copy. A master branch evokes mastery of a subject, like a Master's degree.

Except now, everyone just submits to the idea that the word master only has context in master/slave terminology.

Yeah so there is a master/slave model but I agree that that's not what GitHub was using. I tried to clarify this with my parenthetical statement but apparently did not do so sufficiently. Any suggestions of how to edit?

> A master branch evokes mastery of a subject, like a Masters degree.

I would actually say that a Master's degree uses a different definition (though both are part of the adjective usage). For a master branch (or master document, master copy) I'd say it is the definition meaning "main" or "principal", whereas a Master's degree denotes mastery over something, which is akin to high proficiency.

The master/slave model has nothing to do with git's "master", which is a "master copy", as in record mastering. What you say is valid for SCSI and the like, though.

Sorry I was trying to convey that. But language is often messy. I completely agree with you. I thought my parenthetical mention of Github clarified this but I guess I wasn't clear. I'm not sure how to edit to resolve. Any suggestions?

You're wrong: the etymology is via BitKeeper, which had slave repositories and master repositories.

See https://mail.gnome.org/archives/desktop-devel-list/2020-June... for a description of what happened; tl;dr, it isn't.

That link doesn't say anything contrary to what I said, and indeed includes a link to a twitter thread saying that Bitkeeper may very well be the origin of the 'master' terminology in git.

> how people perceive things.

ah, but which people? The most reasonable and intelligent people? The most traumatized and sensitive people? For the former I'd say yes; for the latter I'd say go get some therapy and keep your trauma to yourself; it shouldn't drive the larger discussion.

> Language is complicated.

Sure, it's organic, but there's no god-of-the-gaps in there. It is knowable. A language isn't as complex as a human brain. Nor is it living, it can be decomposed at will.

Linguists have been working like gangbusters to iron it all out. Steven Pinker, Noam Chomsky, et al...

It reminds me of all the synonyms and symbols, homophones and homographs Chinese users employ when referring to Mr. Xi on Weibo etc., including the now-censored Winnie the Pooh, to get around the censorship that blocks any mention of Mr. Xi in an unflattering light. It’s ever evolving to keep ahead of the censors.

I've never understood why people don't get this. Judging people by their intent resolves a lot of ambiguity, for both political sides.

If someone gets up in front of a room of 1000 people and says "hey guys!", clearly intending to address the entire audience, they're not being fucking sexist.

And if you're standing there with 5 racists showing a pepe flag or an ok sign or whatever random garbage it is this week, then you're a fucking racist.

There's absolutely no need for nuance or ambiguity in either case.

You can argue the toss all you want about how to determine whether someone intends to do something (it's not like you can read minds, so that can obviously have a ton of complexity to it), but if you start from the position that people can unintentionally do something offensive, then everything that follows is just pointless bullshit. You've built a trump card for both sides into the argument, so they can scream past each other like morons without contributing anything meaningful.

> intent only

Communication is a multiple party activity. It's not just a speaker and a speaker's intent. The recipient and how it's received absolutely matter (and should). I've said plenty of things I didn't intend to be stupid. Still stupid.

Is it ok for the Washington football team to be the Redskins because no current fan or owner intends to be using a racist name?

It's not only the hearer getting upset that matters either. There's room for error and for grace and tact on both sides of a conversation. But it's definitely not just intent only. Humans don't work like that. Hell, even computers don't work like that.

Yeah, this “intent is the only thing that matters” mindset is a naive perspective on communication. People like to act as though there's some liberal bogeyman reaching for social currency by acting “woke”, when what has generally happened is someone was thoughtless/inconsiderate and an offended party spoke up (this whole experience was, of course, quite traumatizing for the thoughtless/inconsiderate person).

Text is text, and you can't encode intent without assuming that the reader has a similar level of internet experience to be able to pull such hidden intent using context clues.

It's sad we have to go so far down rabbit holes before asking basic questions fundamental to the alleged 'solutions'.

> Intent should be the grounds for upset, intent only.

People can be unintentionally upset. I'm sure we can all think of a time when we've accidentally upset someone. I still try not to do it.

We don't have to force people to change their language. It will happen over generations as we discuss these topics.

> intent only.

I disagree. Intent matters, a lot, and you're right, but it isn't the only thing. Right now I think we fall on the other side, where reception is all that matters (in the bias training I receive they specifically say that it is 100% reception and not intent). I believe the law works on reception because that's easier to quantify. Intent is very tricky: you can do something that most people would consider wrong and just say "well, I didn't mean it that way." (The inverse can happen too, but fewer people are likely to start a legal case out of spite compared to people defending themselves. It is tricky.)

I believe that there is a middle ground somewhere. Where that is I'm not sure and I think we need to work together as a society to figure that out. I think somewhere in there there is a "reasonable" set of norms, and we have other laws to suggest that we can use this as a basis. But even this can be tricky as there are many different cultural norms and customs. It isn't even just ethnic customs. In America we have very different regional customs that often butt heads. I think we need to recognize that people are different and operate based on different values and often this is fine.

But I think a big thing we've lost in our current standing is good faith. There are three parts to any form of communication: 1) the idea within one's head that a person is trying to convey to the other person; 2) the words, body language, inflection, etc. used to codify this idea (aka: encoding); 3) the understanding of that language used to recover the idea (aka: decoding).

Humans are pretty good encoders and decoders (we wouldn't have made it to where we are if we weren't), but there are limitations. Language is extremely messy, and we often don't notice because we're so used to it. Look at words being used today and you'll often find people talking past one another because they are using different definitions of the same word and actively refuse to interpret the other person's intended message (as an example, every internet conversation about capitalism/socialism/communism).

The point of communication is to pass an idea from one head to another, and that requires understanding these three components. If we do not act in good faith, we cannot communicate. With that knowledge, there are two different actions to take if one wants to act in good faith. The communicator should encode their thoughts as best they can, attempting to understand their audience (aka: speak to your audience). BUT we often forget that the listener's job is to decode, to do their best to determine the idea the communicator is trying to convey (aka: __intent__). In fights we will say "but you said..." even knowing what was intended, as a way to win. This is not in good faith, but it is so prevalent.

When conversations are about mic drops and one upping another person, communication cannot be had.

To your point about intent vs. reception, I think the way the law works is actually more along the lines of "how a reasonable person might receive this". Which is perhaps harder to quantify, but IMO strikes a good balance. However, I totally agree with your point about how some communication has become more about scoring points than having an empathetic and thoughtful dialogue.

*thinks of the case of jokingly yelling fire in a movie theater*

Yelling fire in a movie theater is perfectly legal and protected speech, even if false. It's only illegal if what is said is "likely to incite imminent lawless action," like a riot.


It's funny, this is often presented as a supporting example for limiting citizens' free speech or other rights, typically paired with something along the lines of "no freedom is limitless." Of course, many people don't realize this example is a) outdated and currently false, and b) the argument used against citizens speaking out against WWI under the Espionage Act of 1917, which is considered by many to be one of the most oppressive laws against our rights ever written.


You may recognize the names of some of the act's victims:

Among those charged with offenses under the Act are German-American socialist congressman and newspaper editor Victor L. Berger, labor leader and five-time Socialist Party of America candidate, Eugene V. Debs, anarchists Emma Goldman and Alexander Berkman, former Watch Tower Bible & Tract Society president Joseph Franklin Rutherford, communists Julius and Ethel Rosenberg, Pentagon Papers whistleblower Daniel Ellsberg, Cablegate whistleblower Chelsea Manning, WikiLeaks founder Julian Assange, Defense Intelligence Agency employee Henry Kyle Frese, and National Security Agency (NSA) contractor and whistleblower Edward Snowden.

Surely though, if you cause panic and people get trampled, you would face consequences, no? You probably won’t get accused of hate speech, but I'd hope it could go up to manslaughter.

Do you think that the people who panic would bear some responsibility? If someone yelled fire in a movie theater that I was attending, I would at least look around and smell for it before I started flipping out trampling people.

Let's use another hypothetical just for fun. Let's say I called someone a bad name and they punched me, would I bear the legal responsibility for their lack of self control? I would think not. I would think the person who threw the punch would be charged for assault and I would be charged with nothing. If that's the case, how would that be different from the fire example? Someone spoke and someone reacted with no self control. I would think the person who reacted with no self control would bear at least some responsibility.

I think it's safe to assume it could go up to outright murder if intent could be demonstrated and someone actually died as a direct result.

Thanks for this. I think it is shocking that so many people still use this.

What about yelling bomb on an airplane?

I'm not sure that has been tested in court yet. If based on precedent, I would assume it would also be legal, but it would take a brave soul with a lot of money and free time to find out for certain.

SCOTUS could always overturn the existing precedent but if we assume they won't then it's legal right up until someone gets injured as a direct result. (Unless it somehow ends up running afoul of our ridiculous obscenity laws? I doubt it but you never know.)

Agreed that anyone who decides to demonstrate the above is definitely going to want plenty of money and free time at their disposal.

I remember many years ago when colour schemes/UI themes were still called "skins", and forum discussions about them often yielded amusing racist-if-taken-out-of-context sentences like "do you like white or black skin" and "I have dark skin, but I prefer the white skin." Not a single person was offended or outraged, everyone saw the racial associations but clearly understood the context and was more amused than anything else.

I'm of mixed opinion whether people were actually more intelligent or level-headed back then, or whether the current "ultra-PC/SJW-ism" trend actually started as a joke that got taken too far and adopted as truth by the gullible.

I have no knowledge of the example situation you provided (I don’t recall any such jokes about software skins), but consider the possibility that in some cases where “back in the day we did it and no one was offended” it was in fact the case that people who were offended weren’t welcome or weren’t able to voice their opinions.

I think they're talking about forums, where most phpbb forums back then offered a theme selector to the user, with 'dark' and 'light' being some common names.

I’m very familiar with skins, Winamp skins being the archetypal example for me. I meant that I don’t remember any such jokes deliberately conflating software skins and human skin tone.

If you would actually like to know the history of the current trend, read Cynical Theories by Helen Pluckrose and James Lindsay

> whether the current "ultra-PC/SJW-ism" trend actually started as a joke that got taken too far and adopted as truth by the gullible

It started with a few German philosophers and social theorists in the Western European Marxist tradition known as the Frankfurt School in the interwar period.

> Deep SJW lore

I think you're actually correct, although the roots go back further. People don't understand the historical academic context of modern intolerance.

Jokes that evoke racist hatred are not good, though. You don't actually know that NOBODY - literally you said that not a single person - was offended.

Also, so what if someone was offended? Isn't that mostly irrelevant to this debate? The goal isn't to stop people from offending others, the goal of changing our speech is to reduce the unknown harm that words can do re: normalization of hatred of minorities. The 'jokes' you describe aren't funny and do in fact have a potential to cause real harm in the world.

I would posit that unchecked hatred towards minorities online for decades is one of the reasons we are in this 'mess' of language today.

> The 'jokes' you describe aren't funny

That's an opinion, and it's not a supportable opinion, because we don't really know much about the jokes. The parent's comment wasn't intended to convey the material faithfully, just to give a bare description. We don't know the exact wording, the delivery, the timing, or any other context. Maybe they were lame (another opinion), but it's also possible that I (or even you) would have gotten a chuckle out of it.

Regardless, there's no reason to assume that humor of this nature serves to normalize racial hatred. But if you assume the worst of people, you're certainly more likely to get it in return.

> Regardless, there's no reason to assume that humor of this nature serves to normalize racial hatred.

Yes, there is. This is a very well-researched topic.




> The results were very clear. Subjects that held anti-homosexual views supported significantly higher cuts for the gay and lesbian organization after they were exposed to anti-gay humor, compared to subjects who were not prejudiced against gays and lesbians who were exposed to the same jokes.

So, let me rephrase that

> after hearing jokes featuring homosexuals, the anti-homosexuals (however those were determined and chosen for the study) were anti-homosexual. The not-anti-homosexuals were not anti-homosexual after hearing jokes featuring homosexuals

If anything, that link disproves your own point.

That doesn't disprove my point at all.

How exactly? The anti-homosexual people apparently did not change, while the normal people also did not change. The study thus proved that the presence of the jokes is moot, no?

Ah yes, a progressive claim backed up by psychology papers. A field currently drowning in a reproducibility crisis, and a group who believe that lying and slander are not only okay but should be actively utilised in every goal they pursue.

Yes I think that one can be dismissed.

I'm sorry, are you dismissing all psychology papers?

> A field currently drowning in a reproducibility crisis

My peers have told me that chemistry and biology also suffer from results that are difficult to reproduce, and I've certainly read a number of articles here that decry the lack of reproducibility in computer science too.

> a group who believe that lying and slander is not only okay but should be actively utilised in every goal they pursue.

I'll be honest, I'm not really sure what this is in reference to. If it's in relation to psychology experimental methods, then I believe you're incorrect. Methods that involve actively deceiving subjects would be rejected by ethics boards (at least, they would in the UK). On top of that, there are many papers that do not use observations of human behaviour, and so would not find use in lying to subjects - for example, many neuropsychology papers discuss the physical makeup of body parts.

Psychology has been around for a long time, and some psychology results have deeply influenced society. Some of these papers cover the placebo effect, and various mental health conditions. If you are dismissing all psychology papers, do you also reject these influential papers?

I'm sorry if this reply is a bit full-on, but dismissing a claim's provided evidence by dismissing an entire academic field seems a bit extreme to me.

> I'm sorry, are you dismissing all psychology papers?

Following the reproducibility crisis they can't be trusted at face value. When used to promote SJW and progressivist causes they can almost certainly be dismissed.

> I'll be honest, I'm not really sure what this is in reference to

That was in reference to progressivism, hence why I stated that in the comment.

> Ah yes a progressive claim backed up by psychology papers.

I'd like everyone to stop speaking and stop writing anything because it MIGHT offend someone. /s

You know that many people are advocating for replacing the terms whitelist and blacklist, right?

I don't see how chess is any different.

Chess is different in that the pieces are literally black and white in their color.

Blacklist and whitelist are used as linguistic symbols: black==bad white==good.

That is pretty different to me.

I'm no expert, but I thought the black and white thing really originates from night and day. It is easier to see when there is light (often perceived as white) than at night (often perceived as black). We used white and black to convey the color of the sky. A white color reflects light while a black color absorbs light. This is how I've always thought about it. I never associated this with skin color until someone told me, and I still have never internalized it because it just doesn't make sense to me.

I'm open to being wrong, but to me this connection between archetypal meanings and skin color is a stretch, and one which would require a lot of fundamental change in language and in how we think, because I'm sure I'm not the only one who has codified this representation in my mind. I don't look at a white phone or a black phone and think good or bad (in fact I have a black phone and prefer dark colors while my skin tone is the opposite). And most of us should understand that archetypes are not how you go about judging the world or people. I don't see a person dressed in red and think "angry" (which would be a different emotion in a different culture), or yellow and think "happy". I just see colors.

> That is pretty different to me.

The lists are also referring to the concepts of literal black and literal white, not the skin colors.

So allowed items were written in white ink on black paper and disallowed items in black ink on white paper? Or the other way round, maybe?

Star Wars - “come to the dark side”. Lord of the Rings, Sauron is the “dark lord”. The Dark Ages vs. the Age of Enlightenment. Yin and Yang. I don’t believe all these authors were racist. “Roughly 40% of Americans claim that they would be afraid to walk within 1 mile of their homes at night… 54% of all participants rated the dark within their top five fears”

Perhaps this would be solved if we used different words to describe skin tone than we do light. If “white skin” were called “wumbo skin” and “black skin” called “mumbo skin”, it would be clearer that the etymology of these terms refers to day vs. night rather than skin tone.




Try not to apply that logic to skin color please.

You're right it isn't different, it was a stupid idea with firewalls and it's a stupid idea with chess sets.

I think if we let racists own the color wheel then we've lost.

Because the pieces really are white and black, or light and dark. The list is not black.

But they are not naturally so. We decided to make them that way.

And what? Should I rename my black desk or black chair, or, better, get a new colored one, because "someone decided to call them that way"?

But literally anything can upset anyone. You can never satisfy everyone at the same time.

This was in my opinion already readily apparent the instant that whitelist/blacklist came under fire.

> Yeah using language that upsets people is bad

There are people who get upset about the language used to communicate the results of scientific studies proving the efficacy of vaccines against the ongoing pandemic. Kids get upset at the word “No” even when uttered to tell them that they can’t go out and play with a chainsaw, purely for their own protection.

You cannot determine if language or language usage is bad purely from the response of others even if they get extremely upset about it.

“Political correctness: is communist propaganda writ small. In my study of communist societies, I came to the conclusion that the purpose of communist propaganda was not to persuade or convince, not to inform, but to humiliate; and therefore, the less it corresponded to reality the better. When people are forced to remain silent when they are being told the most obvious lies, or even worse when they are forced to repeat the lies themselves, they lose once and for all their sense of probity. To assent to obvious lies is in some small way to become evil oneself. One's standing to resist anything is thus eroded, and even destroyed. A society of emasculated liars is easy to control. I think if you examine political correctness, it has the same effect and is intended to.” ― Theodore Dalrymple

Or cooler heads prevail like at the Académie Française who recognize that sexual genders are completely unrelated to grammatical genders despite what activists try to say.

So we may just get some people who push back and tell people that chess isn’t racist and it’s people who are injecting race where it doesn’t exist (such as here in chess) who are the problem.

Have cooler heads prevailed in this regard? “Progressive” Americans degendering Spanish by referring to Latino people as “Latinx” seems to be going as strong as ever, despite the protests of actual native Spanish speakers. In their haste to appear progressive, people who say “Latinx” are ironically engaging in linguistic colonialism, as it were.

But that’s the problem with progressives: they trip over themselves trying to be at the front. And yes, I’ve asked people of Latin descent whether they use Latinx in their speech, to which they respond no, that it’s a North American invention, and that in Spanish it’s Latino for singular male, Latinos for plural males or a mix of males and females, Latina for singular female, and Latinas for all female, but never Latinx for any combination of the above.

> Americans degendering Spanish by referring to Latino people as “Latinx”

Depends, do you speak Spanish? If so, there's a governing body - the Real Academia Española (RAE) - and they have referred to the "x" ending as an abomination. It is rejected from the style guide and not acceptable Spanish.

If you want to speak Woke Proto-Spanish, by all means do. Just realize it's not Spanish, and it's spoken by a tiny fraction of a percent, generally American woke-sters desperate to cling to Latin or Spanish culture as they realize they are actually American and, as such, not oppressed minorities (the worst of fates!). This is why Oxford recognizes "Latinx" but the RAE does not.

That’s a perfect example of something that, literally every single time I’ve seen it mentioned, was in the context of people expressing outrage at other people’s activism, and never in the context of an activist actually advocating for it.

Do you mean that it's an imagined problem or that it's an example of a truly terrible idea?

Because I see people using it all the time on TV. It's not an imagined problem.

Read the literature from your nearest HR department and there is a good chance you'll see this term being used in earnest.

I've heard people actually, earnestly, use it. It was high school students, though, so I cut them some slack on the rope of pretentious foolishness. We were all there to some degree when we were teenagers.

I've heard several PhDs use it. They were white English speakers and liberal in their political leanings. It comes across as even more pretentious than high schoolers aping the latest wokeness.

Related: referring to American Indians as 'Native Americans', which is often seen as over-inclusive by American Indians themselves, since it implies you're talking about natives of the entirety of North and South America. While not the worst thing, when you are specifically talking about the native tribes the United States pushed out and forcibly moved to reservations, the term 'Indian' is codified in law[0] and is what the group themselves embraced as their identity so that, as a whole, they could bargain with the United States government to obtain compensation for the tragedies endured.


0: https://www.bia.gov/

I knew you linked to that CGP grey video before I clicked since it’s literally the only place I’ve ever seen that claim made.

I find it dubious, since there are many Indians, and some tribes have taken the stance that they don’t like that term.

The problem seems to stem from 'American' being synonymous with the United States, when in a literal sense it covers the entire North and South American continents. People will probably know what you mean from context, but it can be confusing, so adding on 'Native American' just requires more explaining whenever you bring it up when not among peers.

This is a good point, but I'd also be interested in seeing the opinion of Americans with heritage from India, since using 'Indian' to refer to Native Americans might inconvenience them.

Indian-American: An American citizen whose family came from India.

American Indian: same as Native American.

It's none of my business, but personally I prefer latine[1]. IMO there's no need for white English speakers to tell Spanish speakers how to speak their language. We're all on the journey to a world with more than two genders together. Spanish speakers will figure out their own path to inclusivity.

1: https://www.vox.com/the-highlight/2019/10/15/20914347/latin-...

Latino is already inclusive. There's nothing to be done, as Latinos aren't preoccupied with creating fake problems.

> referring to Latino people as “Latinx”

Isn't this tacking on a Latin-style ending? Which in turn would be westernization? I understood westernizing people to not be the right thing to do. (Which, to be fair: Spanish does originate from Europe, but Latin people are not European.) I never understood this. If someone has a good explanation I'd love to hear it.

Isn't Latinx supposed to be Latino+Latina? Surely those two words are actually gendered (in the biological sex way), unlike most words, which are gendered in a purely linguistic way.

Latinos is how you gender Latino+Latina in a "purely linguistic way", but some people don't like it, so they made a new word. The masculine word is either gender-neutral or "truly" masculine depending on the context, but the feminine counterpart always refers to girls/women.

> despite the protests of actual native Spanish speakers.

The only people I've ever seen mad about "Latinx" were American internet free speech advocates.

> Or cooler heads prevail like at the Académie Française who recognize that sexual genders are completely unrelated to grammatical genders despite what activists try to say.

But that's not really true. I always learned that, for example, ils (grammar-masculine they) should be used when referring to a group of people where any of the people are sexual-gender-masculine, but elles (grammar-feminine they) should be used when referring to a group of sexual-gender-feminine people. Ils and elles have the same rules when referring to a group of inanimate objects depending on the grammar-gender of the objects.

You're both right. In grammatically gendered languages, various situations and contexts are present. Sometimes people get worked up over a non-issue (like the Latinx example others commented on). Other rules have a more debatable impact, like the famous "in groups, the masculine prevails".

Interestingly, other approaches existed in the past, like the rule of proximity, where the gender of the closest element dictates how the verb and adjectives are written.

Languages are an ever-changing thing. I think it's healthy to propose and discuss grammatical changes if it makes sense, but everyone should be aware of what they are actually talking about.

In Germany we had the same. That didn't stop most newspapers from using some form of weird gendering of the language. I think it will fade out, since people don't use it.

It also underscores why some people think the media is a partisan mess. It is, to some degree at least. They even asked people, and most didn't like it. That didn't stop them.

Already done:

> In 2019, Magnus Carlsen and Anish Giri – who as of July were the number 1 and number 10 players in the world, respectively – promoted a #MoveforEquality campaign as a way of acknowledging social inequalities. In their game, black moved first and the line was, “We broke a rule in chess today, to change minds tomorrow.” It was billed as an anti-racist statement, but some took it as a suggestion to change the rules of chess to black having the first move.


I wouldn't be surprised if the Google moderator AI becomes the source of truth on what is offensive. If Google doesn't delete it, then clearly it is ok. If Google does delete it, then it is offensive regardless of anything else.

Or it will at least become a cheap barometer used by journalists: Materials so offensive that they are automatically rejected by all major social networks.

That's the scariest, worst idea I've read all day.

Others have pointed out it's been done -- so it will continue to be done again and again until something gives. But I'd like to point out at least Go is safe for now, since black goes first! (However, white is used by the stronger player when not doing nigiri or playing a handicap game... And I'm sure some artificial drama could be manufactured based on which color you want to give draws to by giving or taking 0.5 from the perfect komi of 7. There's no safe space.)

This happened last year. The Australian Broadcasting Corp (ABC) hosted a discussion about this[1]

[1] https://www.news.com.au/sport/more-sports/john-adams-slams-a...

Or what actually happened was that the radio show asked whether white going first was racially based, concluding that it was not. But conservative media spent days getting themselves outraged over it before it even aired.


OP was talking about someone making the argument, not that it would be affirmed. That's what I was referencing, fwiw

You would be interested in this, starring Magnus: https://www.youtube.com/watch?v=VPFI3-W8Fqo

Yes, not too dissimilar to Github changing the branch "master". Is there a list of things like this that match this pattern that would be easy for people to go after given a few viral tweets? I feel like if there is such a list, it'd be less shocking when the inevitable happens.
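For anyone curious what that GitHub change involves mechanically, the rename itself is just a couple of git commands. A minimal sketch (the throwaway repo here is only to make the example self-contained; on a real repo with a remote named origin you'd also run `git push -u origin main` and `git push origin --delete master`):

```shell
set -e

# Throwaway repo so the example stands alone
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Rename the current branch to "main" (-M forces the rename)
git branch -M main
git symbolic-ref --short HEAD    # prints: main

# New repos can default to "main" from the start (git >= 2.28,
# normally set with --global)
git config init.defaultBranch main
```
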

Real estate is moving away from Master bedroom/bath/suite

Just state that white always starts wars and you might be fine.

This is technically appeasing racists though.

It's clearly racist that White plays first.

White goes first because black was considered a lucky color. So if black went first it would have a double advantage, from being first and from being the lucky color.

I think you just made that up.

No, I didn't: apparently the above was proposed by chess author George Walker. White didn't consistently start first until around 1860.


It'd still be racist if black started first. It's not an argument...

So that would mean Go is racist for black always going first?

Chess is not only racist but also sexist. How come the king is the most important piece on the board but the queen is completely expendable?! And, for goodness sake, the game features actual white knights.

The Queen is a significantly better piece. The King needs to be protected and is borderline unusable until the endgame, whereas the Queen is the most powerful piece from start to finish. This is so evident that at higher levels of play, people just resign when they lose their Queen.

If by higher you mean 1000 ELO then sure. In actual tournament play it's very rare for someone to blunder a queen and resign. Queen exchanges happen in most games and queen sacrifices are fairly common. There are no king exchanges or sacrifices.

In higher levels of play people rarely just “lose” their queen.

Can't tell if sarcasm.

Sometimes people can't tell with Titania McGrath, a masterclass in satire.


You should always be careful with satire, Poe’s law and all. Funny stuff though.
