"Cognitive distortions" are the only tools we have to reason about anything in the presence of limited information (which is basically always). It's basically a toolbox that lets you discredit any thought whatsoever, which is convenient when a patient writes down negative thoughts and the psychiatrist can just hand them a list. But it would work just as well on positive thoughts, or any thought whatsoever.
all of the time, all of them, all the time, always happens, always like, happens every time, completely, no one ever,
nobody ever, every single one of them, every single one of you, I always, you always, he always, she always, they
always, I am always, you are always, he is always, she is always, they are always
It's not always overgeneralizing to use one of these. "Every single one of them wore black" is (potentially) completely factual. A large increase in their use in books, however, suggests that authors are overgeneralizing more than they did previously.
Phrase list: https://www.pnas.org/content/pnas/suppl/2021/07/22/210206111...
* Media which tries to sell rather mundane news as sensational (exaggerating)
* Management literature, which for the last 30 years has tried hard to push people to their limits: walk the extra mile, strive for the best and greatest; mediocrity is the enemy.
Was I trolled into pedantry by your comment?
It solves one problem but walks right into another. Your counterexample would be observed as so-called sealioning, "concern trolling", or one form of the ever-expanding nebulousness of "gaslighting". How does one defuse obstinate, hypocritical crusaders while proving one's own point? The solution doesn't seem to be making a well-made statement. Disagreeing is considered making a "bad-faith" argument. The answer isn't pointing out their fallacies and thought-terminating clichés, either. You'll be accused of co-opting and dog-whistling.
The only solution I've seen is to do or say as one will as though others don't exist. A lesson learned in one of Aesop's fables, The Miller, His Son and His Ass.
> This pattern does not seem to be driven by changes in word meaning, publishing and writing standards, or the Google Books sample
To say "social science studies are bullshit" implies a more generalized claim to me than "social science studies are always bullshit"
People use overt generalization words like these, in my experience, to indicate generalization with obvious exception.
This study is selecting for precisely the wrong thing, imo, or is not properly interpreting what they've selected.
The authors need to understand the difference between rhetoric and factual reporting.
It's not unusual to say "This is the worst thing in the world" to make a point without believing that it literally is absolutely the worst thing in the entire history of human experience.
No, clearly they aren't. But you know what does qualify as a cognitive distortion? That very statement. The "only" way to reason about "anything" based on limited evidence? Really? I mean, Kalman and Bayes would maybe like to have words.
I can see a few, but I'll go with "dichotomous" as the biggest mistake you made. You leapt straight from "Sometimes these mental tools produce correct results" (which is true) to "These tools are the only way to produce correct results" (a ridiculous distortion).
I'm a little dumber; I just overgeneralize to "fad diet X is probably not that great" and skip the hours-long scientific paper review & construction of Bayesian models and priors.
Here is a recent tweet claiming this paper is badly flawed because of changes in the content of the Google Books data this study was based on. Make sure to update your Bayesian models accordingly! Should I lower my prior of "this recent scientific study is trustworthy so I should take it at face value" by 1%, or 5%, or 50%?
This part isn't dichotomous, it's just a strawman (something that I'm guessing would go into the "overgeneralizing" bucket in the linked article).
I didn't say you had to use fancy math to live your life. I replied to your statement that one had to use distorted reasoning, and cited the existence of two named techniques (about reasoning from limited evidence) as a fun way of making the point.
My point wasn't about what you said anyway, it was that you were shouting and flaming about it using exactly the techniques the linked article was noting are on the rise. You see that, right?
For me, analogies are like an easy crutch that can get you that last mile.
Those aren't mutually exclusive. Something can do something only sometimes, and also be the only way to do that thing. If they're wrong, though, it should be easy to prove their claim incorrect by thinking of a counterexample.
These are not simple bugs. In fact we know today that working on changing these thinking patterns can in many cases help alleviate the underlying disorders (that's the C in CBT)!
To come back to your example: Being cautious is not catastrophising. Feeling a stress response or even a panic attack or anxiety because of pathogens is (we are taking a broad definition of catastrophising here).
Also beware of the fallacy that positive thoughts are the opposite of negative thoughts - they aren't. They are distinct factors that share some correlation.
Have you heard of toxic positivity? There’s a growing movement against incessant positivity that has become endemic to a lot of online discourse in particular on Instagram (def not on twitter).
Nothing ever sucks, it’s a challenge! Nobody is sick, they’re a fighter! Nobody is a mean prick, you just need to understand their perspective! Nothing you do is ever stupid, people are just haters!
I suspect it’s a byproduct of the “Enlightenment Now” movement, which asserts that humans have never had it so good as they do now. Worldwide poverty is the lowest ever, violence at its least, etc. There’s the minor issue of growing inequality and impending climate disasters, but that’s all ok and will work out well. Its torchbearer is Steven Pinker and he’s ably supported by, unsurprisingly, the richest 0.1%.
"If you had worn masks or started being cautious before COVID" you wouldn't be catastrophizing, because the identifying symbol of catastrophizing is concluding with high confidence a catastrophe from limited or contradictory evidence. We had plenty of evidence for pandemics (and most epidemiologists saw this coming for decades).
I believe you might be confusing cognitive distortions with heuristics (which are similar, but not the same).
Cognitive distortions correspond to irrational thought patterns or lines of reasoning.
If you have limited information, for example, you're not supposed to just jump to a conclusion (except in a matter which requires an immediate response). This is often seen in many mental health disorders where, for example, a person may jump to conclusions such as "everyone hates me" or "I'm screwed for life" from isolated events which predict no such overarching conclusion.
Those kinds of cognitive distortions are not the only tools we have to reason about anything in the presence of limited information; we can often do much better, which is why correcting them is used in CBT.
So when someone says something, you can almost always say they’re being too general and point out some obscure exception. It’s better to just take every statement and implicitly append “… in most cases”.
Oh? You have seen the sky over the rest of the city. You can see a long way, in the sky.
...at least when it's sunny, that is.
And one thing I've learned is that there is a wide incentive to embed cognitive distortions in the news, stories, and attitudes which comprise culture. Telling people they are victims of more powerful forces beyond their individual control.
Victimhood mentality is the worst mindset a person can have. I wouldn't wish it on my worst enemies.
First, being a victim of forces beyond their control is true for almost everyone.
Secondly, it's probably more healthy for people to not understand or believe it.
Most outcomes, good and bad, lie between those extremes.
I think this misunderstands the crucial difference here. The phenomenon described is not about power dynamics, it is about storytelling: many don't even care to investigate whether there really are powers at fault for their suffering. In fact it is quite the opposite: because they feel special as individuals, they cannot stand the idea that some powerful force damages them as mere collateral, because that would mean admitting to being small and insignificant. At least, I'd say that is the conclusion they would typically reach after looking into the real causal relationships between the powers and their lives, based on facts.
However, it might be much more comforting to tell yourself a story in which "they" target you or your people specifically and intentionally. The more evil you can paint "them" the better, because it makes your cause more noble and heroic.
Telling yourself these kinds of stories might be comforting, but it cements the role you play within that society: forever a sucker, a plaything of forces you cannot comprehend, easy to manipulate, gullible, afraid, angry. Healthy, as long as you don't march to Russia like my Nazi grandfather did when he was 16.
Displace any individual considered healthy about 5 decades back or forward and they'll appear unstable. Make it 5 centuries and they'll basically appear fully psychotic (and not just because they just travelled through time).
Things have gone well for me but in my longer life I've seen many many people who were legitimately "victims of more powerful forces beyond their individual control".
In America, the zipcode of your birth determines your future income far more than your performance in school, for example.
Just on my Facebook yesterday, someone I'd known for twenty years died of cancer, leaving two young kids. She'd led a normal life without any bad habits like tobacco or alcohol and it was simply bad luck - powerful forces beyond her control.
Compassionate people understand that the real world is difficult and uncertain even for dedicated and hard-working people. The whole "victim mentality" is almost always a symptom of other far-right beliefs that would be less palatable if spoken out loud.
"Individuals with depression are prone to maladaptive patterns of thinking, known as cognitive distortions, whereby they think about themselves, the world, and the future in overly negative and inaccurate ways. These distortions are associated with marked changes in an individual’s mood, behavior, and language. We hypothesize that societies can undergo similar changes in their collective psychology that are reflected in historical records of language use. Here, we investigate the prevalence of textual markers of cognitive distortions in over 14 million books..."
So... the finding is that language patterns typically associated with depression have rapidly become common in book language. Interesting. The interpretation is up for debate, I suppose. Maybe it's just that writers are more depressed.
Financial events are labeled, but they don't seem to have impacted the data much. Internet usage, OTOH, seems (at a glance) highly correlated to whatever they're measuring. Maybe online culture moved language in this direction with no real relationship to depression. Maybe the internet made people more depressed. Maybe the internet made writers more depressed. Maybe some complicated knot of those. IE, the internet popularized maladaptive language, which has made us all more depressed.
In any case, assuming the methodology is reasonable, it does look like they've found something here. Worth a discussion.
That doesn't disagree with the hypothesis that it's a society-wide phenomenon.
A big problem is that they only count one side.
"Fortune-telling: Making predictions, usually negative ones, about the future" - counts the phrases: "I will not, we will not, you will not, they will not, it will not, that will not, he will not, she will not".
One would expect that they'd also count "I will", "we will", etc. and show a ratio. But no.
This is measuring something, but what?
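The suggested ratio is easy to prototype. Here's a toy sketch in Python (the pronoun list, regexes, and sample sentence are mine for illustration, not the paper's actual methodology): count "<pronoun> will not" against "<pronoun> will <anything else>" and report the negative share.

```python
import re

# Illustrative pronoun list, roughly mirroring the paper's "fortune-telling" phrases.
PRONOUNS = ["i", "we", "you", "they", "it", "that", "he", "she"]

def will_counts(text):
    """Count '<pronoun> will not' vs '<pronoun> will <anything else>'."""
    t = text.lower()
    neg = pos = 0
    for p in PRONOUNS:
        neg += len(re.findall(rf"\b{p} will not\b", t))
        # Negative lookahead so "will not" isn't double-counted as a positive.
        pos += len(re.findall(rf"\b{p} will (?!not\b)", t))
    return neg, pos

sample = "I will not give up. We will see. They will not listen, but I will try."
neg, pos = will_counts(sample)
ratio = neg / (neg + pos)  # 2 negative vs 2 positive predictions -> 0.5
```

Counting only the left column, as the paper apparently does, conflates "people make more negative predictions" with "people simply write more predictions."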
No - what you want to measure is the absolute number of occurrences, not the ratio, the reason being that "making negative predictions" is not the opposite of "making positive predictions". Similar to the fact that negative and positive affect aren't opposites but distinct factors (that share some correlation, though!).
'Common sense' can easily lead you astray here. It's a good example of something that looks simple, but really isn't.
To make matters worse, journalism is no longer a career that provides a path to a respectable middle- to upper-middle-class life, so you have an entire profession whose members generally face depressive prospects in life.
Want to make society content? Keep journalists content.
So, maybe in the USA it started early and then it spread to other parts of the world through social media. One way or another, it seems important to study the reasons for and the effects of such a change in language.
Then came 9/11, then MySpace launched in 2003.
Reaganomics was the early 1980s, not sure how you're seeing it as closer to the spike than social media.
I mean, why would it be a 'distortion' to be somewhat fatalistic about the current climate change trend that may actually lead to civilizational collapse and an extinction event?
How about famine, war, epidemics wiping out a third of the population, totalitarian oppression, etc.?
"It is suggestive that the timing of the US surge in CDS prevalence coincides with the late 1970s when wages stopped tracking increasing work productivity."
I don't agree with the authors' summary of their own graphs. If you look at the graphs, it's more like "cognitive distortions hit an all-time low around 1980, began slowly inching back up from 1980-2000, then accelerated more rapidly upwards after 2000, leading to a clear trend break by 2005 or so."
I've also heard conflicting info about this and would like a more definitive analysis if anyone knows of one. I've seen people claim that total comp, including especially health care benefits, did not stagnate and continued to track productivity.
Health Care as a job benefit ensures that that fraction of compensation is always spent on health services, at minimum. In many ways it is more of a subsidy to health insurance companies and health care providers.
Even if authors are part of the population and subject to (some of) the influences the general population experiences, there is a whole ecosystem of pressures that should be taken into account, including changes in the editorial ecosystem over time.
There are other sets of public data that may or may not track the general population, like social network activity, blog posts, and comments on different sites. But those are affected by changes in culture, population, and external influences (including disinformation campaigns), and the selection of sites may also select the kind of users, adding bias to the results. I wonder what deviations would be seen at, e.g., Slashdot, which should have around 25 years of comments, with all the objections I already raised.
After the “death of god”, man lost a common frame of reference and replaced highly spiritualized modes of existence with highly economic modes of existence. Well, now we are facing the “death of the market” and finally realizing that it is a destructive fiction, and that it is not reasonable to reduce everything to exchange value and calculation insofar as it leads to the actual existential crisis that is climate cataclysm.
The only way to argue this is a "greed became dominant when man abandoned [our idea of] true worship" approach, which many religious people indeed argue but which is a pretty clear "no true Scotsman" type fallacy when held to the light of day.
Marx’s whole point was to argue why this proposition was false: why “everyone acting according to self interest” actually led, in spite of the good it produced in the short term, to longer-term evils like proletarianization, exploitation of workers, concentration of wealth, etc. (though Marx did not, and of course could not, foresee the catastrophic effects self interest would have on the climate). The whole point is that the progression of capitalism up to this point has proven we need more than just self-regulating markets and self-interested “rational” agents, since we’ve come to see that the absence of other shared values leads to species-level existential threats (again, climate).
We hit bottom when the Catholic Church and the Boy Scouts both turned out to have thousands of pedophiles in positions of authority.
Of course prior to 1980, the church had never done anything to harm those it claimed to care for. Not sure why Catholics are singled out here.
For example, the methodology for detecting "cognitive distortions" is extremely simplistic, relying on short phrases like "I am a", "everyone thinks", and "still feels". It is far from clear what an increase in people saying "I am a" means, and what it really implies about the rate of genuine cognitive distortions or depressive thought patterns in the population. The reasoning here is basically "X correlates with Y, X is up, therefore Y is up". That's not a strong argument. These data are a starting point, not a conclusion.
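To make the objection concrete, here's a minimal sketch (the marker list is just the handful of phrases quoted above, and the sample sentence is my own, not the paper's full schema) showing how pure substring counting scores a perfectly benign sentence as "distorted":

```python
# A few of the short marker phrases mentioned above; the paper's real list is longer.
MARKERS = ["i am a", "everyone thinks", "still feels"]

def cds_prevalence(text):
    """Marker hits per word -- roughly the shape of a phrase-count CDS score."""
    t = text.lower()
    hits = sum(t.count(m) for m in MARKERS)
    return hits / len(t.split())

neutral = "I am a carpenter and everyone thinks the new workshop still feels spacious."
score = cds_prevalence(neutral)  # 3 hits in 13 words, yet nothing here is distorted
```

A detector this shallow can't distinguish "I am a failure" from "I am a carpenter", which is exactly why the correlation argument needs more support.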
The authors also seem to be leaning so far into a desired narrative that they can't accurately read their own graphs. The rapid rise in CDS clearly begins around 2000 - not the late 1970s, which are just the low point of the graph.
You kind of have to have a hypothesis, then "test" it with data. A more reasonable approach (IMO) would be "We found this interesting phenomenon," followed by inevitably speculative interpretation. That's realistically how it works in practice, but to be published you need to fit a Popperian formula. A more ponderous version of this probably wouldn't have been accepted.
I mean, applying a method for diagnosing depressed individuals to the collective psychology of a society is already veering into wtf territory, in terms of interpretation. OTOH, it is interesting and probably significant somehow.
IDK how or what they could have done better, and this does seem like a result worth publishing, assuming the methodology is good. Ultimately, I think people reading the paper mostly read it that way anyhow. Authors found X. Seems interesting. Here's how they're interpreting it. Here's my speculation.
Journalists OTOH, they'll cite the hypothesis verbatim if it fits the article they're writing. I think that's where the shoehorning is a problem. As long as you're reading the paper directly yourself, who cares what order the paragraphs are in.
It’s certainly useful data, but otherwise it lacks any meaningful analysis.
That said, this can't be "useful data" unless you are willing to accept non falsifiable interpretations. There's no way of formulating such an interpretation.
Whether or not we call it science is semantics, highly loaded semantics. We could stop calling it science, but then we'd have to also stop being derisive of unscientific methodology.
More realistically, maybe we should rename "the scientific method" to "the Popperian standard." After all, both the term science and its associated culture predate the definition/standard you are alluding to.
The researchers themselves haven't done anything wrong. They're researching interesting things in their field, and publishing in the format journals demand... ostensibly to satisfy objections such as yours.
You can't have it both ways.
Agreed, that is mostly how it goes.
On a similar note, null-hypothesis testing is not a tried-and-true scientific method. It started sometime in the early 20th century. Aside from p-hacking and mistakes of that kind, there are many philosophical problems with the use of statistical inference itself.
One professor's statement (citation?) doesn't discredit the Proceedings of the National Academies of Science. I would need much more evidence than that. A snarky comment - a common means of discrediting institutions, a popular trend these days - costs nothing to make, and carries little evidence, reason, or meaning.
It's a predictable badness that these "revelations" are specific sentiments about what follows, aka "the nouns" (more or less). "Still feels X," "I am a Y," "everyone thinks Z," all require that X, Y, and Z all have fixed meanings.
I don't think this is taking full account of their claim. They're saying it's a "hockey stick." If you look at the graph like that you can understand what they mean.
For me the study is interesting because I observe the very same change in music lyrics and movies. They have become vividly grimmer compared to earlier years.
I have seen a young person explain it by saying that the younger generation expresses emotions in a more toned-down way.
On the other hand, if we view books and other forms of expression as a way to share stories the receiver cannot live themselves, it would mean the opposite: that people's lives became better, and because of this, art fills the void with negativity. An explanation that is closer to my understanding.
Either way there is a collective shift in expression and it is interesting to research it.
My follow-up questions for this paper would be: which books exhibit this depression-associated language? Does it vary by genre? What happens if you weight by sales/popularity? I'd also be curious to know if this is reflected in news media, either print or television. I feel that news has gotten more emotive and emphatic, but I'm not really sure.
Do you have any data to support that more generally? Necessarily, anyone's experience of a field so enormous as music will be narrow and biased.
(It's not necessarily a problem; every other year for instance probably wouldn't be a cherry picked slice.)
No. I threw out all my CDs years ago.
* Impact of social media on mental health
* Use of CDS by print/online media
* Social divide and tribal thinking
* Perceived instability and uncertainty
Perhaps spurious, but could be seen as more evidence for the contamination theory, since depression and weight gain are correlated.
My previous go-to ngram to link was "participation trophy", which previously had its biggest spike in the 1950s. There's a massive spike now, starting 2006 or 2008, but my rhetorical point still stands: participation trophies were a big thing -- in writing, anyway -- when Boomers were kids, not Millennials.
I seriously worry about the trend and I am not very hopeful about the direction it takes.
I have done things like look at specific dates that turn up in Wikipedia, and you see some things that are real, but when you get close to the time frame in which Wikipedia has existed, sampling effects are strong.
It would be fun to look at ‘I am a *’ though.
The data that supposedly prove heavy metal toxicity are extremely suspect, sometimes outright bizarre.
1. They are elements, so no major changes could possibly have occurred; they appear throughout recent geological history.
2. Their toxicity somehow went unnoticed until recently. There are no records of cadmium being considered toxic before 1970. Somehow nobody noticed the toxicity of the most toxic known metal. Thallium was noticed to be toxic soon after its discovery. Lead was only known to cause poisonings in very large amounts and mostly limited to breathing exposure, and was used in things like make up and even sweeteners. Presumably any noticeable toxicity would prevent it from being used in such cases.
3. The purported mechanism is absurd. The body picks the metals to be incorporated into proteins with great accuracy, yet it supposedly picks up the heavy metal, which poisons the protein and activates it when it should be inactive. It just sounds completely absurd to me.
4. Heavy metals only accumulate until certain concentrations are reached, then they start getting excreted. This effect is widely noted.
I'm going to need a reference for this because I was under the impression that there is no safe level for lead in the body, and it serves no biological purpose.
• Chronic Pb exposure could result in a lower weight gain in rats and a higher Pb content in the brain of model rats.
• Pb exposure reduced activities of key enzymes of glucose metabolism in the brain.
• Pb exposure could disrupt the insulin signaling pathway in the hippocampus of rats."
The glucose transporters are increased (GLUT3 is the one that supplies axons), and the rats weighed less.
Almost everyone rich people interact with is at least relatively well off and doesn't have to work too hard for their money, so they don't see or relate to the suffering and hopelessness of the poor, who are desperately competing for their attention.
Wherever the rich look, things start improving - But where they don't look (which is most places), things are always getting worse.
In this crony-capitalist system, the attention of a rich person is as good as money.
The monetary system is to blame for this. When currency isn't backed by anything, the economy and society become 100% about capturing the attention of rich people. You cannot compete in this system without the approval of rich people. No matter how much better value your products or services may offer, you can never compete, because their earnings are mixed in with easy money straight from the money printers and yours aren't. You can never beat the margins of a big corporation which has direct currency pipelines to hedge funds, governments, etc...
Money should not be so important but when you are far from the money printers, it's the only thing a rational person can think about. Getting the attention of rich people is the only way to get closer to the money printers. It's not a distortion to see things as negative or bleak. Things really are bleak for most people. The real distortion is thinking that everything is fine.
> whereby they think about themselves, the world, and the future in overly negative and inaccurate ways
Is it not possible for 'rock and hard place' depression to be an _accurate_ negativity about the world in the face of oftentimes seemingly impending climate disaster?
And not just politics. Most people can't analyze a car accident without a pile of cognitive distortions.
"I’m a Parkland Shooting Survivor. QAnon Convinced My Dad It Was All a Hoax."
It's good that the authors are at least trying to create some generally interpretable categories mapped into language, even if they may not be (yet) fully convincing, as other commenters point out.
It's an interesting idea (i.e., analyze books) but the context is going to influence the results.
When I look at the popular ideas on all sides of the political spectrum, the trend seems to have been toward ideas that are not only more one dimensional and extreme but more irrational and incoherent. I regularly come across online comments that are borderline word salad, a blathering incoherent mess of the sort that would in the past have immediately led to questions about schizophrenia.
It doesn’t seem to be a specific idea so much as a decline in the lucidity and coherence of cognition itself. The ideas are inane, but I can’t imagine such inane ideas taking hold to such an extent in earlier eras.
I have only two hypotheses that seem like they make sense: gamified social media and CO2 concentration impacting metabolism. I lean strongly toward the former because around 2010 is when algorithmic timelines started to be introduced and it was right around then that I remember a tangible sense of sharp decline. I had to include the latter for completeness, but I hope not as the latter would be far scarier.
Social media companies are the tobacco companies and opiate dealers of the information age. The more I see of them the more I am convinced they are an objective evil and create net negative value. The algorithmic weighting of content for engagement seems to be the real problem, but it’s at the heart of their business model now.
> I regularly come across online comments that are borderline word salad
If this were an actual trend, which I wouldn't want to assume, don't you think there are simpler explanations for it? I'd immediately come up with two:
- The internet has gotten far more accessible (and most of the new users aren't native English speakers)
- ML Bots
I think running a similar analysis as this (and other analyses) on the last 10 years of popular mainstream subreddits would yield some interesting results.
Or, perhaps a new meta-social-network of some sort will appear on the scene that can point a portion of our massive biological compute power at the phenomenon and try to gain some understanding of it. I worry that if something isn't done, the system might start falling apart.
By invoking the words "objective evil", you seem to be taking part in the very trend you observed:
When I look at the popular ideas on all sides of the political spectrum, the trend seems to have been toward ideas that are not only more one dimensional and extreme