is used by Private Eye (British satirical/political magazine) to highlight adverts auto-generated inappropriately to accompany articles on news etc websites.
I don't like the implication that it's the algorithm that's malicious rather than the person who wrote the algorithm (no algorithm is inherently malevolent or benevolent in my opinion, it's just an algorithm), but I also know this distinction is completely pointless for the vast majority of people and "malgorithm" gets the gist of what people are getting at across very well.
Perhaps an elucidating counterpoint is an algorithm written in such a manner that it is deliberately worthless, as a joke (e.g., StackSort, Bogosort). Obviously they're not the result of a bad programmer; they're just inherently bad algorithms.
I actually do though, and I say that as a musician myself. These things are inherently subjective; writing good music isn't just a matter of how closely the musician adheres to a pre-determined set of rules. The qualities of goodness and badness exist in the minds of the creators and the audience rather than being attached to the music itself in some sense. Any attempt to classify "good" versus "bad" music in the sense people usually understand it is just an appeal-to-authority fallacy; the only thing that makes music good is "do I personally enjoy listening to it or not?". You can try to classify music based on how closely it fits a genre's set of rules, but this quickly breaks down into absurdity in practice (for example, acts like the Grateful Dead which span many genres).
That subjective information which describes the badness of a smell doesn't exist within the smell itself, it exists within the mind.
So "Malgorithms" is an accurate one.
Merriam-Webster online says: "a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation" and "broadly : a step-by-step procedure for solving a problem or accomplishing some end".
And so they always do. The problem that an algorithm solves is defined by the algorithm itself. The purpose of a system is what it does.
Algorithms are useful to solve problems for us - the trick is always in making sure that the problem the algorithm solves is the one we want.
 - https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...
However, it doesn't cover other cases for other AI mistakes you mentioned, like self-driving.
I think so-so automation often is used places where there's a lot of zero-sum conflict between workers and management, or where the work itself causes a lot of negative human externalities. (This can be a good thing: it's probably okay to settle for a "worse result" from an automated system if it eliminates a lot of physical or psychological harm to people...some content moderation issues probably fall under this case, but not this one.)
It enabled things like Facebook to displace message boards for the most part. Look at Facebook or Reddit, they are barely able to police obvious noxious behavior in English, and military juntas in Myanmar are able to organize on the platforms.
AI has proven to be terrible at moderating human content, we felt we needed a word for these kinds of buggy systems and have chosen: "
And GPT-J answered:
Which seems to capture the spirit of unreliable AIs pretty well
"After having thus successively taken each member of the community in its powerful grasp and fashioned him at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd."
It's not a human rights violation if it isn't a human doing it!
It's not just something that happens to us, it's something we do. We need to do better rather than accepting it as inevitable, and let future generations worry about what the best name for it was.
> Maybe there's a ten syllable German word that expresses it perfectly?
That being said, I propose Urteilsfähigkeitsauslagerungsnekrose: The necrosis that follows the outsourcing of our capability of judgement.
May I suggest “yacht-feet automation”? It focuses on increasing the yacht feet of Zuckerberg, and doesn’t do anything else right?
Also, 11 syllables.
But English is always a bit awkward for this sort of thing.
"Blacklist" was banned as a term because it was deemed racist. No matter that people understand black and white outside the race issue.
So if blacklist is deemed racist by people, not technology, thus removing context from the equation, why is AI wrong to assume that "attack the black soldier in C4" is hate speech?
Human interactions have a social/cultural context. You can't just recognise $thing, you have to recognise how $thing depends on $context for the correct interpretation.
Current AI either ignores context completely or doesn't parse it correctly.
It's a rediscovery of the ancient "Time flies like an arrow, fruit flies like a banana" problem.
If you're out on a date and you say "Let's go back to mine" it implies one thing. If you say it to some friends after an evening out it usually means something completely different.
Sometimes it means the same thing - but you need to know a lot about the people involved to be able to infer that accurately.
And sometimes humans can't parse these nuances accurately either.
AI-by-grep or stat bucket can't handle them at all, because the inferences are contextual and specific to situations and/or individuals. They can't be extracted from just the words themselves.
Minsky & co researched some of this in the 70s, and eventually it motivated the semantic web people. But it was too hard a problem for the technology of the time. Now it seems somewhat forgotten.
"The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term"
call it maybe philosophy-lacking, or worldview-lacking, but understanding how things 'hang together' in broad terms is precisely what our 'intelligent' systems cannot do. Agents in the world have an integrated point of view; they're not assemblages of models, and there seems to be very little interest in building anything but the latter.
In the natural world we would call that instinct. So maybe 'artificial instinct' if we want to keep 'AI' or 'synthetic instinct' because I think it sounds better.
"I was using a ballpeen hammer to pound in roofing nails, everything was going great until the hammer's 'synthetic instinct' resulted in a painful blow to my groin and a near fatal three story drop. It'll go better next time - I've painted the ballpeen hammer a different color."
- incompetent system
- incompetent robot
- inept system
- inept robot
- careless system
- careless robot
- negligent robot
I like 'neglibot'. "YouTube neglibotted my video." "This is my new email address. My old one got neglibotted." "The app works for typing in data, but the camera feature is neglibotty." "They spent a lot of effort and turned their neglibot automod into carefulbot. In another year it may be meticubot."
sounds like “artificial intelligence” but really captures the fact that it’s just bad to a harmful extent at what it’s supposedly trying to do.
I can't help but wonder what the world would look like if services in the future are provided primarily by AIs. How do we adopt them? Do we have to invent a "Newspeak" just to make AI understand us better so we can live an easier life?
"Question: Have you consumed your food today?", "Answer: I have consumed my food today."
Or a more subtle example:
"Hi! Welcome to ___. What do you want to eat today?", "I want to eat item one, fifty-six, eighty-seven and ninety-one", "Do you want to eat item one, fifty-six, eighty-seven and ninety-one?", "Yes", "Please take a seat. The food will come right away!" (all without making any Hmm or Err noise)
this already happens today, with human servers. The menu items are numbered, and you tell them the number instead of the name of the dish.
I have already started adding a hard 'ng' on 'going to' because my phone keeps abbreviating it to 'gonna'.
An example is that people always hear 50 when I say 30, because of how I pronounce the "y". Anything ending in "th" or "ing" gets confused a lot by people who don't know me.
It's kinda funny to keep track of :)
Did you succumb to the exact same problem GP comment described?
If you want a vision of the future, imagine a man repeating "Hey Thermostat! Can you cool it down in here a little?" over and over–forever.
It's already happened. Have you ever seen a grandparent unfamiliar with new tech talk to Alexa? There's a ton of them on YouTube.
However this is different from bureaucracy in the level of automation and statistical inference. A bureaucratic system doesn’t do inference (or at least that is not its main function), and the steps between require human inputs (albeit really automated humans most of the time).
I suggest automatacracy which strings together automation and bureaucracy.
Composite noun of verschlimmbessern (to make something worse by trying to improve it) and Automatisierung (automation).
We are often reduced to mere conformity to what the artificial intelligence can make sense of.
Captcha: "Are you human?"
Human: <Goes on to do a simple perceptual task even a cat could do if it had fingers>
I used to live in a communist country (probably the same one that Mr. Radic lives/lived in), where "hate speech" -- which was called anti-state, or anti-establishment speech back then -- was punishable by a middle-of-the-night visit by dark-clad police. Yet, people still found ways to openly criticize the Party, and there were even popular songs that openly defied them, which only demonstrates that not even human pattern recognition is good enough to detect these things, as you state in your first sentence.
It's a waste of time, but more importantly it is detrimental to society.
Misinformation spreads like wildfire.
The notion that good speech counters bad speech relies on rational, well informed, critical thinking skills that are lacking in a significant portion of the populace.
It is fundamentally different for a company to exercise its right to free speech in choosing what is said on its platform than for a government to use free speech as a fig-leaf cover for suppressing dissent.
Putting up no fight against misinformation and hate speech is not a winning strategy. Society loses much more from nonsense spreading than it does from having to wait 24 hours for high quality chess content to get remoderated.
It’s about silencing legitimate speech that you disagree with, whether it’s on a factual level or other motivations.
A perfect example: coronavirus being manufactured in a lab rather than originating in the wild. Anyone who suggested it was from a lab was moderated heavily until a report came out saying that in fact it did.
Was this not damaging to society?
And to your point of hate speech, can you give me a definition of the term? You can’t because there isn’t one that a) covers all scenarios and, more importantly, b) doesn’t cover legitimate speech.
It is up to me to think for myself, not up to someone else to do it for me.
Because if you allow humans to moderate then those humans start acquiring power. Which is something FAANG companies strive to prevent at all costs.
The only way this can make sense - especially on HN - is if the people who advocate for company control of regular peoples’ lives work at these companies.
I think this captures the unnatural, unjust, and just basically wrong quality of using AI to act like humans.
"Human-ops" is the justification for removing human pattern recognition because it's better for a small group of employees (e.g. "we can't let our expensive staff view flagged distressing pictures or boring chess videos, so we will get an algorithm instead"). The tech companies' HR say that this is pro mental health, as an additional way to justify this change and any resulting unemployment.
"In modern times, legal or administrative bodies with strict, arbitrary rulings, no "due process" rights to those accused, and secretive proceedings are sometimes called "star chambers" as a metaphor."
"I got starbotted."
"Instagram's automod is a starbot."
"YouTube is too starbotty for your lectures. Better post with your school account."
"We're suing them because their starbot took down our site right after our superbowl ad ran."
"Play Store starbotted release 14 so we cut a dupe release 15. How much will it cost to push the ad campaign back 2 weeks?"
"We use Gmail and Google Docs but not Google Cloud because of the starbots."
"Our site gets a lot of traffic but we don't use Google ads because of the starbot risk. Nobody needs that trouble."
"The STAHP Bill (Starbot Tempering by Adding Humans to Process) just passed the Senate! Big Tech is finally getting de-Kafka'ed. About f**ing time."
'Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.'
As for terminology that captures society’s general over reliance on automation and algorithms to handle things that really ought to have direct human intervention, I like Rouvroy’s “algorithmic governmentality”
Oh and the current administration won’t pass legislation restricting your platforms or revoking Section 230.
There's no reason YT can't distinguish chess from hate speech if they updated their training set. Maybe they weren't aware of this failure case, or they didn't get to fix it, or trying to fix it caused more false negatives. The way they assign cost to a false positive vs a false negative also plays a role.
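The cost trade-off mentioned here can be made concrete. A minimal sketch (with hypothetical cost numbers, not anything YouTube actually uses): given a model's estimated probability that a video is hate speech, the cost-minimizing rule flags it only when the expected cost of leaving it up exceeds the expected cost of a wrongful takedown.

```python
def flag_threshold(cost_fp: float, cost_fn: float) -> float:
    # Flag when p * cost_fn > (1 - p) * cost_fp,
    # i.e. when p > cost_fp / (cost_fp + cost_fn).
    return cost_fp / (cost_fp + cost_fn)

def should_flag(p_hate: float, cost_fp: float, cost_fn: float) -> bool:
    # p_hate is the model's estimated probability the content is hate speech.
    return p_hate > flag_threshold(cost_fp, cost_fn)

# If a missed hate video (false negative) is judged 9x as costly as a
# wrongful takedown (false positive), anything above 10% probability
# gets flagged -- which is exactly how over-eager takedowns happen.
```

Raising the cost assigned to false positives (i.e. taking wrongful takedowns seriously) pushes the threshold up and would have left the chess videos alone.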
Machine error; Machine learning error; AI error; AI fail.
Tbh, so do people. Somebody who's not familiar with chess or board games could think the same thing.
* you, a programmer who coded something that caused company X to lose Y dollars because of a "mistake", have to be on the hook for Y dollars.
* you, a manager who managed a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
* you, a product owner who accepted the work of the manager of a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
* you, the CEO of the company that had a product owner who accepted the work of the manager of a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
Yes, I understand that the cost of a mistake is now higher than the loss suffered by X. That's the incentive to ensure that it does not happen: the wives or husbands or partners of the people who would now have to pay are going to ensure that they do not take those wacky ideas and implement them -- wacky ideas are abstract, but the loss of nice housing, nice organic food, nice daycare for the kids and a nice scholarship fund is real.
Developers need to stop being so allergic to accountability. It's kind of pathetic.
That being said, software development and civil engineering are very different in terms of risk management. In civil engineering there are strict regulations around what you do, and if you play by the rules you have minimal to no risk. Even if dozens of people die under a collapsing building, you are not accountable if everything you did was by the book.
Software development, on the other hand, is more like the wild west. There are minimal regulations, only best practices. One developer can never know if the end product is free of errors.
Pilots are a completely different story. The OP was talking about "causing company X to lose Y dollars", not human lives.
Nah, the opposite. Being paid $800k/year is a great carrot. It should come with a stick so fewer people do the cruise control.
You would have to pay CEO salaries, since the responsibility would lie with developers.
But... would you then like to work in that area if you fear every step you make could be your last?
Do you think there would be much innovation if the punishment costs are so out of balance?
Rename "black" and "white" to "second player" and "first player".
And remove the obviously sexist Queen (why stronger than King?) and King (why should it decide when the game is over?).
Rook is shady as well.
And so on. As Aristotle said: "once we look close enough, we will find someone offended".
If only! Clearly we need different chess skin colors so that everyone can identify with their chess pieces. I think five skin tones  should cover everyone.
See also: green usernames here on HN.
Note: I'm serious and not serious at all here at the same time.
No: the colors don't matter at all. Yes: if someone wants to get offended they will find a way, even over how they are left out because no one makes jokes about them.
I am French and we have a history (and culture) of parody, and of the freedom to do it and to be offended by it. This was the case when I was a kid and up until some years ago.
Now criticizing or making fun of some groups is *dangerous* - you cannot publish a picture of the god of Islam without literally putting your life in danger. Yes, I know that this is a small minority yada yada yada, but I have only one head.
At the same time you can make fun of Catholics. They are not happy, but also somehow "protected" by the culture and history of France. This is similar to Jews, though they became more offendable recently.
The worst is that the people who will officially claim how equal everyone is will say in private that someone is a "tapette" (I do not know the English word for that: a derogatory expression for a gay man), or that Muslims are middle-ages savages, or that Blacks are dumb (we had a few cases on TV where the invited person thought they were off the air when they were not).
I do not know what the solution is. It certainly does not help the normal part of the communities above that people are afraid to put them in the same bag as Catholics (in France): they would not be officially immune from criticism/fun, but they would be perceived as "people like us".
Ha! True enough, "a good pun is its reword" ;).
It might add an element of subterfuge to the game.
I'd say that the one that has traveled the longest distance wins, and captures the piece that started closer.
(My implicit mental model is that both pieces "depart" from their home squares at the same time and travel at a constant rate, so the closer piece arrives at an empty square, while the more distant piece arrives second and captures the first-arriving piece. This may also encourage more dynamic play)
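Under that mental model (both pieces depart simultaneously at a constant rate), the tie-break reduces to comparing each piece's distance from its home square. A toy sketch of the rule, using king-move (Chebyshev) distance as a crude stand-in for actual piece paths:

```python
def chebyshev(a: str, b: str) -> int:
    # Distance in king moves between squares in algebraic notation, e.g. "e4".
    fa, ra = ord(a[0]) - ord("a"), int(a[1]) - 1
    fb, rb = ord(b[0]) - ord("a"), int(b[1]) - 1
    return max(abs(fa - fb), abs(ra - rb))

def capture_winner(home_a: str, home_b: str, target: str) -> str:
    # The piece that traveled the longer distance arrives second and
    # captures the first arrival; returns "a", "b", or "tie".
    da = chebyshev(home_a, target)
    db = chebyshev(home_b, target)
    if da > db:
        return "a"
    if db > da:
        return "b"
    return "tie"
```

Real piece paths differ (a knight's route is not a king's), so a full implementation would need per-piece distance functions; this just illustrates the comparison.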
Anyway, I've been looking forward to this day, when SJWs start trying to change chess because it's "racist" (white has the advantage of the initiative) and "sexist" (it makes you sacrifice the queen to save the king). Looking forward to seeing that level of stupidity being reached!!
It’s still ridiculous -like cultural revolution level ridiculous.
At least nature will never change and nights will be black and days will be light.
Ah, but one could argue that the colours of light and dark wood are closer to the skin colour of humans culturally classified as "white" and "black" than are the actual colours white and black!
> The Community lacks any color, memory, climate, or terrain, all in an effort to preserve structure, order, and a true sense of equality beyond personal individuality.
Chess is a game of war, isn't it? So whoever goes first is the aggressor.
So if Black goes first, that can also play into certain racial biases people have.
(For the record I think the whole thing is stupid)
For example “black’s light squared bishop” would become “light’s light squared bishop”.
Infrared-coating player and Ultraviolet-coating player? Chrome player vs. Brass player? No kidding, I like this last idea.
Chrome Player: sponsored by Google.
Sorry, I can't stop from posting this joke!
- Popular experiences tend to be better experiences, so we all congregate to the same services
- Homogenous user behavior leads to monopolistic situations, increasing outrage when anything goes wrong
- Even if the government doesn't try to enforce moderation, the company attempts to self-moderate to maintain its image
- The popularity of the service makes human moderation impossible, creating a need for inevitably-flawed robots
I see no solution. The only way to win is not to play.
That's true, but we might be able to improve things with a bit more human moderation.
For instance facebook is insanely profitable. They could probably increase their staffing for moderation by a pretty decent multiple and still be very profitable.
So the current state of moderation is not strictly a matter of need, it's also a matter of greed in terms of Facebook wanting to automate away jobs they could pay people to do. And given the state of online discourse, it's a decision we're all paying for.
Even then it will probably only keep working until there's a rift in your community. Say people start arguing over trans rights (weirdly common on Discord), and then users get mass-reported and mass-voted to be banned by an activist minority.
It was always assumed that unnecessarily vulgar language leads to escalation, and in some cases that is certainly true. But the internet netted some evidence that this isn't a general rule. On the contrary, ambitions to sanitize speech can make communities extremely toxic.
i.e. lots of people using a product that works for them.
"A foreigner, especially a British or a white person." https://en.wiktionary.org/wiki/firangi
Language doesn't change unless the population/community changes it. I think the bigger story here is the misuse of AI where a human moderator would've made a better decision.
Which may or may not include coercion?
The video I’m talking about is here: https://youtu.be/KSjrYWPxsG8
Anyway, Google will be fine. Lots of tech companies have managed to re-brand after getting on board with idealists, just look at Hollerith.
At least the human moderator who handled my challenge of it was able to consider context.
Yeah using language that upsets people is bad, but if you allow enough layers between words and concepts, _everything_ can be argued to be offensive for one reason or another. Or will be soon once something else becomes a hot button issue.
Shouldn't be. Intent should be the grounds for upset, intent only. Otherwise you get a Euphemism Treadmill, and that's a goddamn fucking waste of time.
If you ignore someone who is never happy then in time they may respect you, but if you always comply then they will never respect you and their perceived power over you simply increases, which means they feel empowered to ask for more and more.
It's literally a bragging point at my new Co.
It's hilarious watching the amount of mental energy people waste on this.
Unfortunately for the rest of us, they've been put in charge.
I would say there is. That's where the language comes from. BUT I do not think that necessarily means that this is a callback or reference to enslaved _people_. Master/slave model accurately describes the model in a way that people can hear the terminology and understand what is happening without knowing details. It doesn't condone/promote slavery, it doesn't have anything to do with enslaving people, and in no way does it even encourage such behavior.
Context matters, a lot. And to be honest I didn't hear anyone complaining about this (and I live in an extremely liberal place), so it came off (to me) as a grab for social currency (_especially_ since GitHub didn't use the term "slave". GitHub was using "master" in the sense of "main" or "principal", so I didn't even understand the objection). If you try hard enough you can make anything reference race, but at the end of the day what really matters is context and how people perceive things. If no one (or realistically few people, because there are people looking to make issues) is making these connections AND no one is being harmed, then we shouldn't really be worried about it.
Edit: wanted to make clearer that I'm talking about two different usages of "master". One from the master/slave model, and one that GitHub uses (the adjective version of the word, which means "main" or "principal"). And the GitHub version does not reference the former version. I know we're talking about contentious subjects here, but I'm trying my best to convey what is in my head. I'm open to new opinions, but bear with me. Language is complicated.
Except it has absolutely nothing to do with slavery at all. The master branch is akin to the master record, that holds the true and complete copy. A master branch evokes mastery of a subject, like a Masters degree.
Except now, everyone just submits to the idea that the word master only has context in master/slave terminology.
> A master branch evokes mastery of a subject, like a Masters degree.
I would actually say that a Masters degree is using a different definition (though both are part of the adjective usage). For a master branch (or master document, master copy) I'd say it is the definition meaning "main" or "principal", whereas a Masters degree is having mastery over something, which is akin to high proficiency.
ah, but which people? the most reasonable and intelligent people? the traumatized and most sensitive people? The former I'd say yes; the latter I'd say go get some therapy and keep your trauma to yourself, it shouldn't drive the larger discussion.
Sure, it's organic, but there's no god-of-the-gaps in there. It is knowable. A language isn't as complex as a human brain. Nor is it living, it can be decomposed at will.
Linguists have been working like gangbusters to iron it all out. Steven Pinker, Noam Chomsky, et al...
If someone gets up in front of a room of 1000 people and says "hey guys!", clearly intending to reference the entire audience, they're not being fucking sexist.
And if you're standing there with 5 racists showing a pepe flag or an ok sign or whatever random garbage it is this week, then you're a fucking racist.
There's absolutely no need for nuance or ambiguity in either case.
You can argue the toss all you want about how to determine whether someone is intending to do something (it's not like you can read minds so that can obviously have a ton of complexity to it), but if you're starting from a position that people can unintentionally do something offensive then everything that follows is just pointless bullshit. You've just built a trump card for both sides into the argument so they can just scream past each other like morons without contributing anything meaningful.
Communication is a multiple party activity. It's not just a speaker and a speaker's intent. The recipient and how it's received absolutely matter (and should). I've said plenty of things I didn't intend to be stupid. Still stupid.
Is it ok for the Washington football team to be the Redskins because no current fan or owner intends to be using a racist name?
It's not only the hearer getting upset that matters either. There's room for error and for grace and tact on both sides of a conversation. But it's definitely not just intent only. Humans don't work like that. Hell, even computers don't work like that.
People can be unintentionally upset. I'm sure we can all think of a time when we've accidentally upset someone. I still try not to do it.
We don't have to force people to change their language. It will happen over generations as we discuss these topics.
I disagree. Intent matters, a lot, and you're right, but it isn't the only thing. Right now I think we fall on the other side, that reception is all that matters (in the bias training I receive they specifically mention that it is 100% reception and not intent). I believe the law works on reception because that's easier to quantify. Intent is very tricky. You can do something that most people would consider wrong and just say "well I didn't mean it that way." (the inverse can happen too, but less people are likely to start a legal case out of spite compared to people defending themselves. It is tricky)
I believe that there is a middle ground somewhere. Where that is I'm not sure and I think we need to work together as a society to figure that out. I think somewhere in there there is a "reasonable" set of norms, and we have other laws to suggest that we can use this as a basis. But even this can be tricky as there are many different cultural norms and customs. It isn't even just ethnic customs. In America we have very different regional customs that often butt heads. I think we need to recognize that people are different and operate based on different values and often this is fine.
But I think a big thing we've lost in our current standing is good faith. There's three parts to any form of communication. 1) The idea that is within one's head that they are trying to convey to the other person. 2) The words, body language, inflection, etc that are used to codify this idea (aka: encoding). 3) The understanding of that language that was used to convey the idea (aka: decoding). Humans are pretty good encoders and decoders (we wouldn't have made it here where we are if we weren't) but there are limitations. Language is extremely messy and we often don't think it is because we're so used to it. But you can look at words being used today and you'll often find that people are talking past one another because they are using different definitions of the same word and actively refuse to interpret the other person's intended message (as an example, every internet conversation about capitalism/socialism/communism).
The point of communication is to pass one idea from one head to another head. It requires understanding that there are these three components. If we do not act in good faith then we cannot communicate. With that knowledge it suggests there are two different actions to take if one wants to act in good faith. The communicator should try to encode their thoughts as best as they can, attempting to understand their audience (aka: speak to your audience). BUT we often forget that the listener's job is to decode, to do their best to determine the idea that the communicator is trying to convey (aka: __intent__). In fights we will say "but you said..." even knowing what was intended as a way to win. This is not in good faith but is so prevalent.
When conversations are about mic drops and one upping another person, communication cannot be had.
It's funny, this is often presented as a supporting example to limit citizens' free speech or other rights, typically paired with something along the lines of, "no freedom is limitless." Of course many people don't realize this example is a) outdated and currently false, and b) the argument used against citizens speaking out against WWI under the Espionage Act of 1917, which is considered by many to be one of the worst and most oppressive laws to our rights ever written.
You may recognize the names of some of the act's victims:
Among those charged with offenses under the Act are German-American socialist congressman and newspaper editor Victor L. Berger, labor leader and five-time Socialist Party of America candidate, Eugene V. Debs, anarchists Emma Goldman and Alexander Berkman, former Watch Tower Bible & Tract Society president Joseph Franklin Rutherford, communists Julius and Ethel Rosenberg, Pentagon Papers whistleblower Daniel Ellsberg, Cablegate whistleblower Chelsea Manning, WikiLeaks founder Julian Assange, Defense Intelligence Agency employee Henry Kyle Frese, and National Security Agency (NSA) contractor and whistleblower Edward Snowden.
Let's use another hypothetical just for fun. Say I called someone a bad name and they punched me; would I bear the legal responsibility for their lack of self-control? I would think not. I would think the person who threw the punch would be charged with assault and I would be charged with nothing. If that's the case, how is that different from the fire example? Someone spoke and someone reacted with no self-control. I would think the person who reacted with no self-control would bear at least some responsibility.
What about yelling bomb on an airplane?
Agreed that anyone who decides to demonstrate the above is definitely going to want plenty of money and free time at their disposal.
I'm of mixed opinion whether people were actually more intelligent or level-headed back then, or whether the current "ultra-PC/SJW-ism" trend actually started as a joke that got taken too far and adopted as truth by the gullible.
It started with a few German philosophers and social theorists in the Western European Marxist tradition known as the Frankfurt School in the interwar period.
I think you're actually correct, although the roots go back further. People don't understand the historical academic context of modern intolerance.
Also, so what if someone was offended? Isn't that mostly irrelevant to this debate? The goal isn't to stop people from offending others, the goal of changing our speech is to reduce the unknown harm that words can do re: normalization of hatred of minorities. The 'jokes' you describe aren't funny and do in fact have a potential to cause real harm in the world.
I would posit that unchecked hatred towards minorities online for decades is one of the reasons we are in this 'mess' of language today.
That's an opinion, and it's not a supportable opinion, because we don't really know much about the jokes. Parent's comment wasn't intended to convey the material faithfully, just a bare description. We don't know the exact wording, the delivery, the timing, or any other context. Maybe they were lame (another opinion), but it's also possible that I (or even you) would have gotten a chuckle out of them.
Regardless, there's no reason to assume that humor of this nature serves to normalize racial hatred. But if you assume the worst of people, you're certainly more likely to get it in return.
Yes, there is. This is a very well-researched topic.
So, let me rephrase that:
> after hearing jokes featuring homosexuals, the anti-homosexuals (however those were determined and chosen for the study) were anti-homosexual. The not-anti-homosexuals were not anti-homosexual after hearing jokes featuring homosexuals
If anything, that link disproves your own point.
Yes I think that one can be dismissed.
> A field currently drowning in a reproducibility crisis
My peers have told me that chemistry and biology also suffer from results that are difficult to reproduce, and I've certainly read a number of articles here that decry the lack of reproducibility in computer science too.
> a group who believe that lying and slander is not only okay but should be actively utilised in every goal they pursue.
I'll be honest, I'm not really sure what this is in reference to. If it's in relation to psychology experimental methods, then I believe you're incorrect. Methods that involve actively deceiving subjects would be rejected by ethics boards (at least, they would be in the UK). On top of that, there are many papers that do not use observations of human behaviour, and so would have no occasion to lie to subjects; for example, many neuropsychology papers discuss the physical makeup of body parts.
Psychology has been around for a long time, and some psychology results have deeply influenced society. Some of these papers cover the placebo effect, and various mental health conditions. If you are dismissing all psychology papers, do you also reject these influential papers?
I'm sorry if this reply is a bit full-on, but dismissing a claim's provided evidence by dismissing an entire academic field seems a bit extreme to me.
Following the reproducibility crisis, they can't be trusted at face value. When used to promote SJW and progressivist causes, they can almost certainly be dismissed.
> I'll be honest, I'm not really sure what this is in reference to
That was in reference to progressivism, hence why I stated that in the comment.
> Ah yes a progressive claim backed up by psychology papers.
I don't see how chess is any different.
Blacklist and whitelist are used as linguistic symbols: black == bad, white == good.
That is pretty different to me.
I'm open to being wrong, but to me this connection between archetypal meanings and skin color is a stretch. I don't look at a white phone or a black phone and think good or bad (in fact I have a black phone and prefer dark colors, while my skin tone is the opposite). It's also one that would require a lot of fundamental change in language and how we think, because I'm sure I'm not the only one who has codified this representation in my mind. And most of us should understand that archetypes are not how you go about judging the world or people. I don't see a person dressed in red and think "angry" (which would be a different emotion in a different culture), or yellow and think "happy". I just see colors.
Perhaps this would be solved if we used different words to describe skin tone than we do for light. If “white skin” were called “wumbo skin” and “black skin” called “mumbo skin”, it would be clearer that the etymology of these terms refers to day vs. night rather than skin tone.
There are people who get upset about the language used to communicate the results of scientific studies proving the efficacy of vaccines against the ongoing pandemic.
Kids get upset at the language “No” even when uttered to tell them that they can’t go out and play with a chainsaw, purely for their own protection.
You cannot determine if language or language usage is bad purely from the response of others even if they get extremely upset about it.
So we may just get some people who push back and tell people that chess isn’t racist and it’s people who are injecting race where it doesn’t exist (such as here in chess) who are the problem.
Depends, do you speak Spanish? If so, there's a governing body - the Real Academia Española (RAE) - and they have referred to the "x" ending as an abomination. It is rejected from the style guide and not acceptable Spanish.
If you want to speak Woke Proto-Spanish, by all means do. Just realize it's not Spanish, and it's spoken by a tiny fraction of a percent of people, generally American woke-sters desperate to cling to Latin or Spanish culture as they realize they are actually American and, as such, not oppressed minorities (the worst of fates!). This is why Oxford recognizes "Latinx" but the RAE does not.
Because I see people using it all the time on TV. It's not an imagined problem.
I find it dubious, since there are many Indians, and some tribes have taken the stance that they don’t like that term.
American Indian: same as Native American.
Isn't this connecting a Latin conjugation? Which in turn would be westernization? I understood westernizing people to not be the right thing to do. (Which, to be fair, Spanish does originate in Europe, but Latino people do not.) I never understood this. If someone has a good explanation I'd love to hear it.
The only people I've ever seen mad about "Latinx" were American internet free speech advocates.
But that's not really true. I always learned that, for example, ils (grammar-masculine they) should be used when referring to a group of people where any of the people are sexual-gender-masculine, but elles (grammar-feminine they) should be used when referring to a group of sexual-gender-feminine people. Ils and elles have the same rules when referring to a group of inanimate objects depending on the grammar-gender of the objects.
Interestingly, other approaches existed in the past, like the rule of proximity, where the gender of the closest element dictated how the verb and adjectives were written.
Languages are an ever-changing thing. I think it's healthy to propose and discuss grammatical changes if it makes sense, but everyone should be aware of what they are actually talking about.
It also underscores why some people think the media is a partisan mess. It is, to some degree at least. They even asked people, and most didn't like it. That didn't stop them.
> In 2019, Magnus Carlsen and Anish Giri – who as of July were the number 1 and number 10 players in the world, respectively – promoted a #MoveforEquality campaign as a way of acknowledging social inequalities. In their game, black moved first and the line was, “We broke a rule in chess today, to change minds tomorrow.” It was billed as an anti-racist statement, but some took it as a suggestion to change the rules of chess to black having the first move.
This is technically appeasing racists though.