Research in psychology: are we learning anything? (experimental-history.com)
180 points by ctoth 6 days ago | 212 comments





One big area of psychology not mentioned in the article that has been seeing a good amount of success is applied psychology with respect to Human-Computer Interaction.

For example, there's a lot of basic perceptual psychology regarding response times and color built into many GUI toolkits in the form of GUI widgets (buttons, scrollbars, checkboxes, etc). Change blindness (https://en.wikipedia.org/wiki/Change_blindness) is also a known problem for error messages and can be easily avoided with good design. There's also a lot of perceptual psychology research in AR and VR too.

With respect to cognitive psychology, there's extensive work in information foraging (https://en.wikipedia.org/wiki/Information_foraging) which has been distilled down as heuristics for information scent.

With respect to social psychology, there are hundreds of scientific papers about collective intelligence, how to make teams online more effective, how to socialize newcomers to online sites, how to motivate people to contribute more content and higher quality content, how and why people collaborate on Wikipedia and tools for making them more effective, and many, many more.

In past work, my colleagues and I also looked at understanding why people fall for phishing scams, and applying influence tactics to improve people's willingness to adopt better cybersecurity practices.

Basically, the author is right about his argument if you have a very narrow view of psychology, but there's a lot of really good work on applied (and practical!) psychology that's going on outside of traditional psychology journals.


As a counter-argument, HCI was investigated pretty thoroughly in the 80s and 90s, and operating systems of the time actually had the results of that well implemented in them. I feel that modern OS developers seem determined to throw away all these lessons.

Don't get me wrong, I think the modern HCI on mobile phones is remarkably good. But I haven't seen any improvement (except maybe the mouse scroll wheel and having a higher resolution screen) on real computer interfaces since the 90s.

And then you have some real useful psychological theories on attention and user-guiding that are used for evil to create antipatterns. I don't think we're making progress.


I think we should be careful to distinguish the question of whether we are growing knowledge from the question of whether we are using that knowledge (and using it positively). If we aren't using it, there is an interesting question of why, but there should be a clear difference between not finding knowledge and not utilizing the knowledge we find.

I've a theory that most UX/UI developers started in their youth as gamers, especially in "twitch" genres, because many interactions for me are now closer to playing Descent than typing a paper into WordPerfect.

> Don't get me wrong, I think the modern HCI on mobile phones is remarkably good.

One of the challenges of psychology is individual variation. Humans have more in common with one another than we have differences, but individuality is a major factor that forces psychologists to look at things statistically unless they are specifically trying to understand or control for individual variance.

I bring this up because my personal subjective opinion is that HCI on modern mobile phones is absolutely atrocious and I don't use a smart phone as much as most people as a result.

I think that when it comes to interacting with a tool, what you are accustomed to makes a huge world of difference. I grew up with desktop computers and laptops. With keyboards, in other words. As a coder and a *nix "power user", I like command line interfaces. I like being able to tweak and customize and configure things to my liking. When I have to use MacBooks at work, it has been soul-crushing for me, while others absolutely love the UI of macOS.

I also remember the shift of the mobile revolution. A lot of us at the time were starting to get very annoyed by the creep of mobile design conventions making their way into non-mobile contexts. At the time it was understood that those mobile design decisions were "forced" as a result of the limitations of a mobile device, and it was clear that applying them to non-mobile contexts was a cost-cutting measure (mobile first, in other words).

Although well designed iconography can transcend language barriers and facilitate communication, I find that the limited resolution of a smart phone screen forcing designers to use glyphs instead of written text is very confusing to me. I mean, don't get me wrong, I would love to learn ancient Egyptian, but it is often far from intuitive or obvious what these hieroglyphs on the screen are meant to communicate to me. In other words, the iconography is not well designed IMO. At least not in a way that creates an intuitive experience FOR ME.

But a kid who grew up in a world of smart phones is going to be able to navigate them intuitively because they have years of learning what those esoteric glyphs on the touch screen are. They've had years of "typing" out text messages on tiny touch screens.

On a good mechanical keyboard I can type upwards of 117wpm before I start making mistakes. When trying to text my wife one sentence I need to put aside an afternoon out of my day to get it written correctly. I could get started on how awful auto-correct is but everyone knows this to the point where it's become a cultural meme. Sorry, auto-correct turned "Can you grab me some milk while you're there?" into "fyi the police are here with a search warrant."

So yeah, big tangent off of "HCI on mobile phones is remarkably good." Maybe it is in a relative sense and is as good as it can get... I mean we've had years to iterate and make improvements. But I suspect that a lot of it has to do with people just learning and getting used to haphazard design decisions that became the de facto standard for mobile, because the tech industry (and business at large, if we're being honest) loves to copy.


I also was raised on using a keyboard to interact with a computer. I agree with a lot of your points - the UI on a mobile phone is not very good at doing text-based stuff, but I think that's OK, because I shouldn't be trying to do large scale text-based stuff on such a tiny screen with a tiny input area.

What works well on the phone UI is the way that the touchscreen has been integrated well into it, and the various gestures are mostly highly intuitive as to what they do (although if we could stop maps rotating when we try to zoom in/out with the pinch gesture, that'd be lovely thanks).

The problem comes when trying to apply the mobile phone style UI to a real computer with a keyboard, large screen, etc. That's just awful, but it appears to be the route that UI designers are galloping down these days.


Interesting. I agree with you about being most comfortable with a desktop/keyboard interface, but have the exact opposite opinion regarding macs.

IMO, OSX is the perfect platform for a keyboard-driven power user. It's unix/BSD based, so software works mostly the way you want it to, but unlike Linux it "just works" without endless fiddling. I don't use the OS UI much at all: Spotlight lets me open any app with a few keystrokes. All my time is spent in the terminal or the browser.


I've done almost no fiddling on NixOS in the last 7 years. People fiddle on Linux because they like to fiddle. My experience is it absolutely Just Works. By contrast, I've had OSX at work delete my data after one update and corrupt its install after another.

This is especially important in industrial settings. If a machine operator makes a mistake, it's not just expensive; it can cost lives. There were instances where operators actively fed fuel into fires because they misunderstood the situation displayed on the HMI.

Some time ago I found a really nice presentation about the ISA 101 standard covering this topic. The basic idea is: if the HMI looks boring, everything is okay; if something goes in a dangerous direction, colors and other elements are used to draw your attention.



> there are hundreds of scientific papers about collective intelligence, how to make teams online more effective, how to socialize newcomers to online sites,

I'm curious what research there is about how to create better-socialized groups of people in general; obviously some cultures are more successful in certain areas than others, despite starting with basically the same human genetics--is there any evidence that a culture can learn/adapt in intentional pro-social ways? How does a society learn to be less corrupt over time? How do people decide to stop littering/speeding/parking illegally? How does a society develop a respect for their environment, for their neighbors, for future generations, etc.?


This is a really great question, and well beyond my areas of expertise. What I can point you to is this excellent book by my colleague Bob Kraut and several of his colleagues, entitled Building Successful Online Communities: Evidence-Based Social Design. It summarizes a lot of empirical research into design claims about how to socialize newcomers, increase contributions, improve the quality of contributions, and more.

https://direct.mit.edu/books/monograph/2912/Building-Success...

You might also look into research on pro-social behaviors. https://en.wikipedia.org/wiki/Prosocial_behavior

One of my favorite books that I learned about from my colleagues is Influence by Robert Cialdini. It looks at how to use known social influence tactics to change people's behaviors. Ideally, these would be used for things that society widely regards as positive (e.g. less littering), though these have also been used for phishing attacks and other dark patterns.

https://en.wikipedia.org/wiki/Robert_Cialdini


It's really telling how it's much easier to progress when things that you are working on are directly measurable rather than self-reported or estimated through proxies.

Also, progress in any science is contingent on progress in technology. There's only so much you can figure out before you'll need new, more precise ways of measuring things to go any further.


> perceptual psychology research in AR and VR too

That sounds interesting. Would you mind sharing where you would point me if I wanted to follow up on the latest research?

Something like arXiv but for psychology? Unless it's only in the magazines ("Psychology Today")? I'd be happy to hear the magazine names too, if you'd be so keen to share.

Thank you very much!


I'm not an expert in AR and VR, but I can point you to papers by two of my colleagues who know a lot about this space.

David Lindlbauer is a faculty member at CMU who applies a lot of perceptual psych to his research on VR. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C39&q=dav...

Roberta Klatzky is a perceptual psychologist who has done a lot of work on haptics. One of her ongoing projects is augmented cognition through wearables, e.g. giving people instructions in heads-up displays based on the current state of things (e.g. it looks like you successfully removed the lug nuts, here's your next step in changing the car tire). https://scholar.google.com/scholar?hl=en&as_sdt=0%2C39&q=rob...


Thank you! Checking!

In the broader category of cognition, I think we understand a bit better how people rationalize their decisions. How many things we do almost entirely on pure reflex and then manufacture a story that explains it without sounding crazy or just saying “I don’t know.”

My suspicion is that in these areas of "nuggets of knowledge", psychology studies kinda work well and can be applied piecewise.

But I feel that anyone who thinks psychology will be fully predictable, or even up to the standards of medicine today, is in for a disappointment.

(but oh well, they can still run their experiments on grad students or Amazon Mechanical Turk workers and get another grant)


And medicine is like "we don't know why this medicine works, just that it mostly does", so it is not much of a standard.

Yes, I agree with the post's arguments.

One HUGE thing it's missing, though, is the deliberate hacking of results to reach statistical significance. I'm willing to bet that the results of a majority of psychology studies are not reproducible.

In another lifetime, I worked as a research assistant at a very large, well-funded, Ivy League psychology lab. Talk about p-hacking. Our PI would go so far as to deny potential candidates entry into our study, as well as the therapy, simply because the PI thought these candidates wouldn't help the therapy our PI developed look good in our study. Note, these candidates did meet all our OFFICIAL study criteria for entry into the study.


"I'm willing to bet that the results of a majority of psychology studies are not reproducible"

Indeed

> Study replication rates were 23% for the Journal of Personality and Social Psychology, 48% for Journal of Experimental Psychology: Learning, Memory, and Cognition, and 38% for Psychological Science. Studies in the field of cognitive psychology had a higher replication rate (50%) than studies in the field of social psychology (25%).

https://en.wikipedia.org/wiki/Replication_crisis


That is appalling; imho you might as well call an area of study that has less than 50% reproducibility for studies published in “credible” journals a pseudoscience.

I do not even think 50% is that good either and should be lumped in with pseudoscience as well.

The author talked about it at length in another blog post. https://www.experimental-history.com/p/im-so-sorry-for-psych...

This is basically just scientific fraud, no?

There probably should be room in some of the social sciences for flexibility like this as long as it's called out right at the top as part of the experiment design so that the reader knows this is exploratory initial research being done for directional purposes - and that's it.

Unfortunately, as History, Philosophy, and the other liberal arts disciplines became 'sciencified', the ability to deliberate rigorously but still with enough room to explore has been sacrificed in favor of trying to be more like the physical sciences.


I personally lean to yes, but that's more about what people do with the results than the results themselves.

Here's an infamous example: https://en.wikipedia.org/wiki/Milgram_experiment#Validity

Honestly after reading that it seems impossible to really conclude anything...as it's just full of conflicting results...is that innately fraud? No but certainly careers/$ have been made from biased/agenda-driven interpretations which seems fraudulent.


It's also how most empirical science operates.

If someone collects data and the study outcome is not preregistered, you can assume p-hacking. It would be implausible not to. And in most fields, preregistration is not common. (And even if there's preregistration, regularly people just switch their outcomes, and nobody cares.)
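The arithmetic behind this is worth making concrete: under the null hypothesis a p-value is uniformly distributed, so testing k unregistered outcomes and reporting only the best one yields a "significant" result with probability 1 - 0.95^k. A minimal simulation sketch (the function names and trial counts are mine, purely for illustration):

```python
# Why unregistered multiple outcomes inflate false positives:
# under the null, each p-value is uniform on [0, 1], so the minimum
# of k p-values falls below 0.05 far more than 5% of the time.
import random

random.seed(0)

def study(n_outcomes):
    """Simulate one null study: k independent p-values, report the smallest."""
    return min(random.random() for _ in range(n_outcomes))

def false_positive_rate(n_outcomes, n_studies=100_000):
    hits = sum(study(n_outcomes) < 0.05 for _ in range(n_studies))
    return hits / n_studies

print(false_positive_rate(1))   # ~0.05, the nominal rate
print(false_positive_rate(10))  # ~0.40: ten outcomes, pick the best one
```

Preregistration removes the freedom to pick the winning outcome after the fact, which is exactly the degree of freedom this sketch exploits.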

And to play the devil's advocate: psychology is probably doing better these days than most other fields, because it's been the poster child of the replication crisis.


i think that before a science can be "a science" with powerful theories and universal laws, there needs to be a long period of existing as a proto-science where people aren't doing experiments and are just observing and describing.

before Darwin, you had to have Linnaeus just describing and cataloging animals; before {astronomy theory guy}, you had to have {people just tracking and observing stars}.

psychology may have tried to jump the gun a bit by attempting to become theoretical before there were a few generations of folks sitting around quantifying and classifying human behavior.

this was definitely true in cognitive neuroscience. once folks got their hands on fMRI, this entire genre of research popped up that was "replicate an existing psychology study in the scanner to confirm that they used their brain". imo, a lot more was learned by groups that stepped back from theory and just started collecting data and discovering "resting state networks" in the brain.


I suspect that after 400 years of the scientific method, that we may be reaching the limits of single variable experiments in a number of fields. Statistical methods can find those patterns, and as we advance in those areas I expect us to advance in messy sciences like psychology. We’ll be able to more reliably look at people or other chaotic systems and see how three inputs work together to create a single effect.

The math for multivariate experiments has been well understood and applied since the early 20th century.

Modern industry would not exist without it.
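For readers unfamiliar with that math: the classic tool is the factorial design, where every combination of factor levels is run and main effects fall out of simple averages. A toy sketch (the factor names and response values here are invented purely for illustration):

```python
# Toy 2x2 full factorial experiment: two factors, each at levels -1/+1.
# The response numbers are made up for illustration only.
runs = [
    # (factor A level, factor B level, measured response)
    (-1, -1, 10.0),
    (+1, -1, 14.0),
    (-1, +1, 11.0),
    (+1, +1, 19.0),
]

def effect(index):
    """Main effect of one factor: mean response at +1 minus mean at -1."""
    hi = [r[2] for r in runs if r[index] == +1]
    lo = [r[2] for r in runs if r[index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(effect(0))  # main effect of factor A: 6.0
print(effect(1))  # main effect of factor B: 3.0
```

Because every run informs the estimate of every factor, a factorial design extracts more information per observation than varying one variable at a time, which is the efficiency the comment alludes to.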

The problem with psychology experiments is that the mind has many hidden variables which cannot be easily accounted for.


Moreover hard science is more or less based on at least three things: objectivity, quantifiability, and empiricism. We suppose that objective phenomena exist and can be independently verified and confirmed by other observers, using quantifiable measures to agree upon exactly what it is we are observing when conducting an experiment.

The issue is that the mind is by nature subjective and wholly private, inaccessible to outside observers. Further, whereas in physics it is possible to separate the experimenter from the experiment (to an extent, observer effect and quantum mechanics notwithstanding), in matters of the mind the observer and the observed are of necessity one and the same; you are in fact part of the experiment yourself.

> The problem with psychology experiments is that the mind has many hidden variables which cannot be easily accounted for.

This is where the pillar of quantifiability breaks down, and barring advances in techniques for inspecting the brain with greater spatial and temporal resolution it's hard to see how one can quantify what cannot be directly observed.

Empiricism is the only aspect of the three aforementioned that can still hold up within this domain, and there is a centuries-long tradition of studying the mind subjectively, qualitatively (as opposed to quantitatively), and yet in a still empirical fashion with falsifiable hypotheses; various mystical and meditative practices exist in this space, such as the fire kasina practice, especially as formulated by a trauma physician [0], complete with falsifiable hypotheses and steps for independent reproduction. What distinguishes practices in this space from the harder sciences is that the phenomena observed are subjective, not "out in the physical world," and there does not exist any instrumentation for measuring them apart from your own faculties of perception.

As an aside I found the comparison to shamans and spiritual teachers in TFA interesting, since I have always considered psychology and spirituality to concern themselves with the same subject matter and problem domain of minds; one could say that they are both proto-sciences of mind, and psychology is the latest iteration of this tradition (though as TFA states somewhat misguided and out of touch with its roots in mysticism; on this latter point see Jung's Psychology and Alchemy for a discussion of where analytical psychology connects with the older esoteric traditions).

[0] https://firekasina.org/wp-content/uploads/2017/11/the-fire-k...


This is a misunderstanding. The mind is not wholly private and subjective; it's objectively quantifiable. Dualism ended a long time ago, and psychology is a materialistic science.

> The issue is that the mind is by nature subjective and wholly private, inaccessible to outside observers.

Not by nature, it's not. Unless you define it to have some immeasurable spiritual quality to it which is obviously not experimentally discoverable and so of little use discussing here.

> This is where the pillar of quantifiability breaks down, and barring advances in techniques for inspecting the brain with greater spatial and temporal resolution it's hard to see how one can quantify what cannot be directly observed.

Pretty much every experiment has a massive amount of relevant states which we cannot quantify.

There's a whole lot of quarks in 1 kg of steel but we don't need to know all of their states to measure macro quantities like its temperature and strength.

The mind has proven very resistant to this sort of useful, measurable macro property.

It's a typical case of a chaotic system. Perhaps innovations in the modeling of such complex systems (not too different from the advancements we're seeing in ML) will be the key to better insights into the mind.


> Not by nature, it's not. Unless you define it to have some immeasurable spiritual quality to it which is obviously not experimentally discoverable and so of little use discussing here.

There is no need to bring in "spirituality" (at least, as commonly and colloquially understood) or "souls" into the picture - see for instance qualia, whose existence is self-evident, and yet are also not amenable to examination by external observers. Irrespective of how much you can pick and probe at the brain and measure wavelengths of light, there is "something it is like" to actually experience the phenomenon of "red", and this experience itself is not readily accessible to objective/quantitative methods.

> It's a typical case of a chaotic system. Perhaps innovations in the modeling of such complex systems (not too different from the advancements we're seeing in ML) will be the key to better insights into the mind.

I could be convinced that minds are ultimately emergent phenomena of plain physical and mechanical processes which are too frighteningly complicated for us to analyze with contemporary methods.


> is not readily accessible to objective/quantitative methods.

Just because the mind is not readily quantifiable with current technology doesn't mean that it's subjective "by nature".

> for instance qualia, whose existence is self-evident,

No, it isn't and I see no way to test for "qualia" so, applying Newton's flaming laser sword, it's not worthy of debate.

It's a very dangerous thing to draw conclusions about empirical phenomena from metaphysics. I suggest you stick to the scientific method when trying to understand the empirical world; it has been far more successful than philosophical rambling.


> No, it isn't and I see no way to test for "qualia"

You are verifying the existence of qualia every instant of your existence. It's the most immediately apparent empirical fact conceivable, since it is sensory experience itself, and you are testing its presence by the mere fact of being alive (philosophical zombies [0] notwithstanding).

> Before she left her room, she only knew the objective, physical basis of those subjective qualities, their causes and effects, and various relations of similarity and difference. She had no knowledge of the subjective qualities in themselves. [0]

I suggest you confirm the definitions and senses of terms you criticize before being so dismissive of them.

[0] https://plato.stanford.edu/entries/qualia/#Irreducible


The experience of qualia doesn't define any test. By what metrics would we even test it against? Are qualia orderable in any way? Do they have weight, a size, or substance? Do processes of material 'exist' physically in any meaningful sense?

We have not devised a test for others' qualia, and the only evidence we have of qualia is our own experience, which is a model of objective reality and not reality itself.

Bringing up philosophical zombies: we can't know if everyone inhabits their own universe, independent of each other, filled with zombie replicas. From anyone's perspective, that scenario would be 100% indistinguishable from what Occam's razor suggests. It therefore follows that qualia do not exert any sort of physical presence.

There are nonphysical things that we consider to exist, numbers being the prime example (har de har har). Numbers do not physically exist, but are a property we impose on various groups (which we distinguish in our own minds). Similarly, processes like experience do not 'physically exist' but are a property of physical existence (which we politely assume other people possess, and so do not call them zombies).


This may be true, but what does it have to do with psychology as a science? The way psychology deals with metacognition (thinking about thinking) is to correlate it with objective measures related to the phenomenon (performance accuracy, changes in BOLD response in task fMRI). We don't care if it's qualia or just reports of experience; that is a different debate and field entirely.

There's a feedback loop between technology and science. Without progress in science there can't be progress in technology. But also, without progress in technology there can't be progress in science.

In applied science they call this the ladder. It’s the same reason some people are desperate to keep certain sorts of manufacturing in country. Once you lose that mutualism you can’t just spin up a new factory. You have to do what every second world country on the road to being a first world manufacturing superpower has done: you make cheap shit until you can make mediocre products until you can make good ones until you can make high end items.

partially disagree with this, every proto-science historically had a bunch of wrong but highly sophisticated theories. medicine, alchemy (as mentioned in the article), physics, biology (Aristotle), astronomy. for some reason it seems you need the wrong theories to organize the empirical data.

I actually think Freud’s elaborate mental structures have some of this feeling to them.


  > for some reason it seems you need the wrong theories to organize the empirical data
There's a somewhat well-known article on this by Isaac Asimov: The Relativity of Wrong.

The scientific process is really misunderstood. People think you use it to find truth, but actually you use it to reject falsehoods. The consequence of this is that you narrow in on the truth, so the two goals look identical, but the distinction does matter, at least if you want to understand why that happens and why it's okay that science has been wrong many times. In fact, it's always wrong, but it gets less wrong (I'm certain there's a connection between that website and this well-known saying).

He's well known for his Sci-Fi, but he got a PhD in chemistry, taught himself astrophysics, and even published in the area. He also wrote physics texts. I found Understanding Physics quite enjoyable when I was younger; it isn't at the level of complexity I saw while getting my degree, but then it isn't aimed at university students.

Anyways, I'm just saying, he's speaking as an insider and I do think this is something a lot more people should read.

https://hermiene.net/essays-trans/relativity_of_wrong.html

I believe there's a copy of Understanding Physics here but currently offline: https://archive.org/details/asimov-understanding-physics


i think that astronomy/physics/gravitation is actually a pretty weird case in the history of science and most things don’t go that cleanly.

better examples might be medicine, where people just bounced around from one insane wrong theoretical system to another (humors to “empiricism” to bad air to Paracelsus to Avicenna and round and round and round). but somehow progress happened anyway. actual steady scientific progress only took off in the 19th century.

or chemistry, where alchemical theories were also completely bizarre, mostly mysticism and poetry. but despite being “not even wrong”, people following them became pretty good at laboratory skills.

After many centuries, this laid the ground for empirical chemistry, and after a few more centuries, a theoretical system emerged that is close to right. But there was a lot of progress even under the “not even wrong” theories.


I'm not sure this is quite right and really is counter to the "less wrong" progress. I know physics history best so that's what I used.

My history of chemistry is a bit better than medicine, but I think you have to be careful with chemistry because much of the mysticism was coded language to guard their secrets. Maybe it shouldn't be as surprising that chemists are still a bit more secretive of their work than say physicists. And also we need to make sure we disassociate modern alchemy from classic alchemy, just like we would need to do with astrology with respect to astronomy. When there is a branching and one branch has the legitimate work it should be no surprise that the other branch becomes even crazier.

We also have the benefit of distance with physics and astronomy because not nearly as much was recorded, especially with respect to what everyday people believed. There were far fewer professionals in those fields (almost exclusively the rich, because you had to be rich to spend your time studying).

For medicine, I think you are being a bit too critical. Yes, "bad air" (miasma) was a bad theory, but did that theory lead to improved outcomes? Yes! The underlying mechanism was incorrect, but the model caused people to clean up the cities to smell better, and I'm sure you can see how that would result in decreased illness. It caused hospitals to be well ventilated, and in some cases it even caused doctors to clean their hands (when their hands smelled). Many parts of the belief were testable and could seem confirmed through experiments, because smell is a confounding variable. It led to some doctors wearing masks because they wanted to put better-smelling things in front of their faces, and well... the mask helped even if the herbs didn't (though some even did). But it is also true that there were falsifiable experiments, which people did perform, that did not fit the model. This is *exactly* what led people to discover germ theory. There had to be an explanation for why those failed, right?

But it would be naive to attribute the persistence of this belief solely to a lack of scientific experimentation or observation (both happened): odor really is caused by particles. The problem is that odor particles are way smaller than germs, so you get a ton of false positives, and not all germs are associated with things that smell. But the two definitely correlate in many ways, including being able to pass through air and liquids.

You ask how progress happened anyway? Well, it happened because they were not too far off course. Our view of history is often very limited, and for most people it is a brief summary of what we know, so we do not see how many people were pushing back against the status quo. But I promise you, it wasn't just Semmelweis vs the world (no matter how poorly he was mistreated). Our view of history is warped in ways that both over-exaggerate some things and gravely under-represent others. These are not in contention; they are just different forms of noisy and biased information.

So I do not think medicine or chemistry run counter to my point (we could discuss chemistry similarly, and I will even mention that lead can be transmuted to gold, though doing so requires an understanding that wouldn't be discovered until Dalton, and the actual capability wouldn't arrive until very recently). I also encourage you to read the wiki on the history of germ theory (it even quotes Martin Luther discussing washing his hands, a full 300 years before Semmelweis). While it is concise, I think you can see that there was lots of pushback against disease being just from bad air well back into history. Like many summaries, it captures the main events but misses much of the smaller progress in between. That shouldn't be surprising, as you might guess it would require far more text.

  https://en.wikipedia.org/wiki/Germ_theory_of_disease

I think we kind of agree with each other. Doctors, based on experience, found things that worked better at treating patients, and they also had completely wrong theoretical models to rationalize those things.

I don't really see monotonic progress: e.g. going from Avicenna to Paracelsus seems like a lateral move at best as far as scientific knowledge goes (and both theoretical systems stuck around competing for hundreds more years), even though Paracelsus and his followers did make some important contributions to medicine in practice, and medicine improved over that period.

So you had practical progress for a long time, but scientific knowledge, meaning theories of nature, stayed equally bad.


  > doctors based on experience found things that worked better at treating patients. and they also had completely wrong theoretical models to rationalize these things.
And in truth this is a lot of what Asimov was talking about. Because what matters is how accurate the theoretical model is.

I'll often say that "truth doesn't exist" but I'm careful about my context. Truth doesn't exist because it requires infinite information. But that doesn't mean we can't trend towards it. This statement is no different than "all models are wrong, but some models are useful."

This is why I talk about science not actually being able to prove things, but rather about ruling things out. That's what is important. The model is always wrong, but is it improving? If it has a better total outcome and can explain more than the previous model, then yes. That's the thing: it has to be able to explain what the last model does, and more. If it can't do both those things, then it isn't progress (yet, at least).

And I think our only distinction of scientific progress here is that you might need to consider that this is a high dimensional problem, with many basis vectors. What science is, is what is observable and testable. I jokingly call physics "the subset of mathematics that reflects the observable world." (This is also a slight at String Theory...)


Astronomy: you might go with Galileo Galilei, and you wouldn't be too wrong. [0]

[0]: https://en.wikipedia.org/wiki/Galileo_Galilei


  > there needs to be a long period of existing as a proto-science where people aren't doing experiments and are just observing and describing.
I think you misunderstand science.

  > before darwin
And this strengthens my confidence.

There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.

There were great contributions to astronomy long before Kepler. There were many experiments that influenced the whole field. There was a lot of important chemistry that happened long before Lavoisier (conservation of mass) and Dalton (atomic model).

The proto-sciences are nothing to scoff at. They aren't useless and they weren't ill-founded. They were just... noisy (and science is naturally a noisy process, so I mean *NOISY*). There's nothing inherently wrong with that. The only thing wrong is not recognizing the noise and placing unfounded confidence in results. That famous conversation between Dyson and Fermi discussing von Neumann's elephant wasn't saying that Dyson didn't do hard work or that the work he did had no utility; it was that you can't place confidence in a model derived from empirical results without a strong underlying theory. You'd never get to that if you only observed, because you'd only end up making the same error Dyson did.

Science, in its nature, is not about answers, it is about confidence in a model that approximates answers. These two things look identical but truth is unobtainable, there is always an epsilon bound. So it is about that epsilon! Your confidence! So experiments that don't yield high confidence results aren't useless, but they are rather just the beginning. They give direction to explore. Because hey, if I'm looking for elephants I'd rather start looking where someone says they saw a big crazy monster than randomly pick a point on the globe. But I'm also not going to claim elephants exist just because I heard someone talking about something vaguely matching the description. And this is naturally how it works. We're exploring into the unknown. You gotta follow hunches and rumors, because it is better than nothing. But you won't get anywhere from observation alone. Not to mention that it is incredibly easy to be deceived by your observations. You will find this story ring true countless times in the history of science. But better models always prevail because we challenge the status quo and we take risks. But the nature of it is that it is risky (noisy). There's nothing wrong with that. You just have to admit it.


  > There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.
Isn't this OP's point, though? People saw results, and even worked with what they saw, but underlying theories were all over the place and it wasn't until the time of Mendel that we started to have even the most rudimentary sense of rigor or scientific method when it came to the field that we now know as genetics. And the contention is that what came before Darwin and Mendel wouldn't stand up as rigorous science in our eyes, but was nevertheless the crucial foundation for what became the field of genetics.

In a way yes, but I'm saying their proposal of how to handle the situation is too strong: don't do experiments

  >>> there needs to be a long period ... where people aren't doing experiments and are just observing and describing.
I strongly disagree with this because observation isn't enough. You have to experiment.

Yes, it's fuzzy. But embrace the fuzziness. Acknowledge it. The truth is that observation isn't enough. You can NEVER discover truth from observation alone. Science doesn't work without interaction. There are three classes of causal structure: correlation, intervention, and counterfactual. We know the memes about the first, but the other two require participation. You'll get lucky and have some "natural experiments", but this is extremely limited. What I'm saying is that we can work with these issues without tying our hands behind our backs and shooting ourselves in the foot. Stuff being hard is no reason to handicap ourselves. I'm arguing that only makes it more difficult lol

I think one of the major issues is that we (scientists) fear that openly discussing limitations and admitting that we don't have high (statistical) confidence will result in people not taking us seriously. And in many ways this is a reasonable fear. I'm sure many scientists, myself included, have annoyingly found that an honest limitations section ends up just being ammunition for reviewers to reject the work. A criticism my advisor has given me is that I'm "too honest". Maybe he's right, but idk, I think that thinking is wrong. Because science is about ruling things out, not proving results (you can effectively achieve the latter by doing the former, but you can't directly do the latter). And the younger the field is (e.g. my field of ML is VERY young), the noisier the results are.

Personally, I'd rather live in a world where we're overly open about limitations than not. We're adults and can recognize that's the reality, right? Because papers are communication from one expert to others (and not to the general public, though they can read along)? As I see it, the openness is just letting others know what areas should be explored.

Don't fear the noise, embrace it. It's always there, you can't get rid of it, so trying to hide it only makes more.


You must observe before doing experiments. Otherwise you would not know if you are asking the right questions. It takes a lot of observation before you can gather enough data to ask intelligent questions. You can ask questions before gathering 'enough' data, and those questions are useful, but you won't know which questions will lead you down a wrong path (alchemy) until you observe why those questions don't make sense (chemistry).

tl;dr: you both are right, and are talking past each other.


  > tl;dr: you both are right, and are talking past each other.
Then you misunderstand my pushback. I fully agree: you must do both; doing both was explicitly my point.

parpfish said

  >>>> there needs to be a long period [...] where people aren't doing experiments and are just observing and describing.
This is what I pushed back against because I agree with you, you need both. I understand my first comment was not as clear, but the one you replied to I think I made this apparent.

Observation and experimentation are tightly coupled. They drive each other. You need not start with one, and you can also perform them in parallel. I think observation driving experimentation is clearer, but the reverse happens too (coupled). But you need to intervene, because experimentation is much like bug hunting in software. You need to find out where things differ, and those are always in the rare and unexpected events. If you just observe, you wouldn't know what to observe. You'd have oodles of nominal data, and very little about the edges. This is why observation alone will lead to wrong conclusions. Intervention is about making the black swans appear in a more timely manner. You say "okay, it works with this case, but what happens when I change this one variable?"

Every scientist is intimately familiar with this because when you begin to experiment, you find that you need to start observing different things than what you initially expected. We can draw parallels to coding, where you have likely experienced that you cannot formulate what the actual code will be until you start to get your hands dirty. Sometimes it is similar to what you expected and you only need minor changes, but many times you discover you had naive assumptions and things need to change drastically. It is easy to believe only minor changes happen because the drastic changes are composed of many minor ones. If you don't believe me, start documenting or go look at commit histories.

I'll give an example of a real life event: infrared radiation! Herschel had the belief that different colors were associated with different temperatures (which is true), and the experiment he devised used one thermometer to measure the colors and two as controls. The story goes that he decided to place one thermometer just beyond red and discovered that it read the highest of them all, but the truth is he probably just put one of his controls there because it was a natural place to put it, saw a higher temperature, and said "what the fuck?" (it doesn't really matter which version is true). This, accidental or intentional, was intervention. *It was experimentation that led to observation.* This led him to devise more experiments to determine what was actually going on. You would never get here from observation alone because you can't observe infrared without specialized equipment (e.g. thermometers). You also won't get the right conclusion if your observations are just that thermometers read warmer around dark objects (a known fact Herschel used!), nor by observing thermometers reading higher temperatures near glass (lenses explain this). What are you going to observe? It is invisible!

I should also mention that at least a decade earlier, Pictet was experimenting with mirrors and was even able to demonstrate the transmission of heat without the transmission of light! Herschel was probably well aware of Pictet's work. But what Herschel's work did was narrow down what was actually happening in Pictet's experiment. He logically understood that the prism separates the colors in light, and that it does so in a continuous manner, so it doesn't seem unreasonable to think there could be "unseen light" beyond the red (which does fade in intensity to the eye). It may also be unsurprising that a year after Herschel's work, Ritter discovered ultraviolet light. To confirm it, a thermometer wasn't really sensitive enough, but he could use silver chloride, since it was known that violet light caused a faster reaction. And after learning about invisible light beyond red, why not search for invisible light beyond violet?

It may again be easy to frame these as "observation first" but that's not accurate, they are coupled. Most certainly Ritter went hunting, and hunting for something that had not been observed. Because frankly, the observation was the proof. You'd never get there "from observation alone" (or rather not until you have a huge population and there are just enough random events and you have enough time. But even then, many things probably would not happen within the lifetime of the sun).


> There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.

No, people did not know natural selection before Darwin. He spent decades collecting and then analyzing data gathered in the Galápagos Islands before he made his breakthrough.

It's pure hindsight bias to think that you can go from "I bred the fattest chickens together, who made a fatter chicken" to "Humans evolved from apes who evolved from single-cellular organisms". For millennia, people from all cultures believed that God created humans from the void. In the absence of data, that's as good a guess as you can have. If Darwin concocted his theory of natural selection before he had his data, no one would have believed him. By dismissing the theory of natural selection as something that was "obvious" pre-Darwin you are dismissing his life's work.


  > No, people did not know natural selection before Darwin
https://en.wikipedia.org/wiki/Natural_selection

Read the history. The topics discussed are not reminiscent of evolution because of hindsight; they are similar because they are similar. Darwin himself references many of these. People knew of artificial selection because they practiced it. You don't breed animals and plants without some basic knowledge here.

I'm not saying Darwin's work wasn't important. It's critical. But this does not warrant dishonoring and forgetting all those whose work led to that point. Their work wasn't as groundbreaking and influential, but they still helped set the stage. Darwin's work didn't appear out of a vacuum.

Science doesn't happen in leaps and bounds.

  > By dismissing the theory of natural selection as something that was "obvious" pre-Darwin you are dismissing his life's work.
This is a grave misinterpretation of what I mean. You have made mountains out of the pebbles I described. It can both be true that scatterings of these ideas existed, with only circumstantial evidence for them, and that monumental work was required to legitimize those ideas and fill in so many holes.

Your point about what the average person believed is also irrelevant. Even now many reject evolution, and so many more did just a decade or two ago. It was a national debate not even a century ago.


By forcing formal study of the mind into the constrained methods used for studying the physical world, it allows the government and profit/power seekers to be the only actors free to use the methods that work best.

I wonder if this is purely a coincidence.


What methods are appropriate but forbidden by this “forcing” (and what’s causing the forcing exactly?)

Methods that do not adhere to the scientific method.

The forcing is performed by culture. Think back to COVID, the effect was on full display in various forms during that spectacle. Or, pick most any war that gets substantial coverage in the media.

I'm unfairly picking on science here a bit, the problem is ideologies in general.


As in… what methods?

Psychological studies have to adhere to various controls and what not. "The method".

Put some smart people in charge of an ambitious psychedelic powered initiative...but they have no controls. Not science? Cannot be science?

It's somewhat like how hip hop influences and accelerates white cultural evolution. Do this idea, see what happens when I do it.


Can be science! Doesn’t have to be science! Science folks do exploratory studies all the time to get a sense of whether the tree is worth barking up before they do the hard work of proving it.

Psychedelic studies might not suit themselves to FDA gold standard double-blind clinical trial science, but science overall involves a really broad toolkit. Psychedelics aren’t the only kind of intervention that can’t be safely or effectively blinded, and there are methods that attempt to demonstrate an effect anyway.

It seems to be thanks to people trying stuff as well as rigorous scientific studies that psychedelics are having their present-day moment in the sun (thanks, MAPS et al!). (My favorite part was that time Alexandria Ocasio-Cortez cosponsored a psychedelics bill with arch-conservative, retired Navy SEAL Dan Crenshaw.)

Things can be valuable without science proving it—nobody’s coming after the local preacher or imam or rabbi or shaman demanding that they scientifically prove their ministerial insights. And lord help the busybody who comes at a retired Navy SEAL trying psychedelic therapy for their post-traumatic stress…

I wonder, though—does it seem to you that psychedelics should be industrialized? There’s a fine line between “legal” and “aggressively marketed by drug companies.”


I think they should be studied using the full spectrum of approaches and people available to us, not restricted (officially/rhetorically) to an ideologically constrained and confused discipline like science as it is practiced (which is not the same as what its scriptures teach).

Anyone is free to learn therapy skills and use them in their own lives, you're just not allowed to call it therapy.

We've learned that it hasn't produced much research that holds up to replication, that the vast majority of research never gets properly replicated at all anyway, and that despite the endless meta-analyses of glorified internet surveys, people's mental health hasn't been improving.

We're certainly learning how to use psychology to manipulate people, though. Advertising, dark patterns, propaganda, and behavioral conditioning just wouldn't be the same without psychology research. We're performing research on children to learn the youngest age at which they can recognize a brand name (age 3, last I checked), or how best to keep them hooked on a video game/child casino, and that research is making companies money hand over fist.


> I recently read The Secrets of Alchemy by Lawrence Principe, which I loved, especially because he tries to replicate ancient alchemical recipes in his own lab. And sometimes he succeeds! For instance, he attempts to make the “sulfur of antimony” by following the instructions in The Triumphal Chariot of Antimony (Der Triumph-Wagen Antimonii), written by an alchemist named Basil Valentine sometime around the year 1600. At first, all Principe gets is a “dirty gray lump”. Then he realizes the recipe calls for “Hungarian antimony,” so instead of using pure lab-grade antimony, he literally orders some raw Eastern European ore, and suddenly the reaction works! It turns out the Hungarian dirt is special because it contains a bit of silicon dioxide, something Basil Valentine couldn’t have known.

> No wonder alchemists thought they were dealing with mysterious forces beyond the realm of human understanding. To them, that’s exactly what they were doing! If you don’t realize that your ore is lacking silicon dioxide—because you don’t even have the concept of silicon dioxide—then a reaction that worked one time might not work a second time, you’ll have no idea why that happened, and you’ll go nuts looking for explanations. Maybe Venus was in the wrong position? Maybe I didn’t approach my work with a pure enough heart? Or maybe my antimony was poisoned by a demon!

> An alchemist working in the year 1600 would have been justified in thinking that the physical world was too hopelessly complex to ever be understood—random, even. One day you get the sulfur of antimony, the next day you get a dirty gray lump, nobody knows why, and nobody will ever know why. And yet everything they did turned out to be governed by laws—laws that were discovered by humans, laws that are now taught in high school chemistry. Things seem random until you understand ‘em.

Well, this example doesn't just fail to support the argument, but undercuts it. Basil successfully identified the kind of antimony that would work, -despite- having no concept of silicon dioxide. He did not write down something like "not all kinds of antimony work for this recipe, so get a bunch of different kinds and try them all" -- that, or a stronger version ("sometimes the recipe fails, we don't know why"), would support the author's point.

So we're left with the author trying to argue that this alchemist thought the world was "too hopelessly complex to ever be understood" on the basis of ... the alchemist correctly identifying the ingredient that would make the recipe work.


I’m floored by the suggestion that professional training as a therapist does not produce a statistically significant improvement in ability to treat mental health conditions.

It’s interesting that one comparison they offered was between advice from a random professor versus a session with a therapist. I can remember several helpful conversations with kind, older professors during difficult times. Maybe we should identify people whose life experiences naturally make them good counselors and encourage them to do more of it, instead of making young adults pay $200k for ineffective education and a stamp saying they can charge for therapy.


Speaking as a tenured professor of clinical psychology, this part kind of irked me a bit. It's not exactly false but it's a little misleading (like some other parts of the essay).

Lots to say about it but this is a finding that has been reported intermittently for decades. However, it's being spun a little misleadingly.

Note that the author says the untrained professors were selected for their ability to be warm and empathetic. That's not everyone (we all know not everyone is warm and empathetic), and even trainees learn the basics of therapy very early, like immediately in their first term. People going into clinical psychology are also somewhat self-selected for empathy to start with.

This research is kind of being taken out of context too. Wampold, one of the authors cited (who I have the greatest respect for) is very big on "nonspecific factors", meaning things like empathy, good social skills, and so forth. His studies in general tend to be focused not on "does training matter?" but "do specific therapy protocols matter, or is it about the clinician's social/relationship skills?"

If you want some kind of medical standards, you can't just say "oh it's ok, everyone can just be warm and empathetic". You have to train on it, grade it, hold it to some standard. Otherwise you get manipulative, self-serving therapists who do harm in the long run (the length of a study versus real settings is another issue).

Another point is that many of these issues are not unique to psychology. In lots of medical scenarios it's been shown that the amount of training needed to competently perform a wide variety of procedures is lower than current standards in the US require. Experienced clinicians in many fields have acquired biases that interfere with practice, while young trainees are more worried about performance and more open-minded, and so forth (on average, a little; not trying to stereotype).

A huge, enormous volume of studies over many years have shown that therapy works compared to all sorts of placebos and controls; that some therapists are reliably better than others; but that what makes therapy "work" overall is not what protocol-driven therapies (CBT etc) assert. It's not so much that training isn't necessary, it's that the field has been obsessed with scientific details that, although well-intended, don't matter, and healthcare in general is full of phenomena that we'd rather not admit.


Thanks for writing this, appreciate the point of view of someone who knows what they’re talking about. I guess my gripe is that the time-consuming and expensive training process isn’t able to reliably elevate a random young practitioner to the helpfulness level of “wise and patient professor who is offering their time to mentor and counsel even though it’s not in their job description”… but that is in fact a very high bar to hit.

It’s not surprising that some people are naturally good therapists just from a lifetime of observing people, and also not surprising that some of those people end up in teaching-focused academic jobs.

I guess you can train people to be empathetic if they’re motivated in the right ways but just lacking the skill. It makes sense that it’s a big part of counselor training.


"healthcare in general is full of phenomena that we'd rather not admit."

What kind of things are you talking about?


The obvious example would be the placebo effect. Drug/vaccine trials are the only area of science I can think of that are routinely held up as pinnacles of rationality, the outputs of which all Good Citizens must take on faith, and yet which attempt to control for some sort of paranormal force in which people can unconsciously fix their own bodies through psionic abilities.

This regularly happens and nobody seems the slightest bit curious about why. That is, when you think about it, totally mad. I'd suggest it's got to be either:

1. The only psychology mystery worth investigating, or

2. A sign that clinical trials are actually mostly junk science

Either would be hard to admit.


"some sort of paranormal force in which people can unconsciously fix their own bodies through psionic abilities."

I think you are being very dismissive there.

After all, the placebo effect happens in your body, which your mind inhabits. These are not two separate systems interacting by some magical means. This is one system with various subsystems that interact all the time.

The concrete mechanism of the placebo effect is still unknown, but the observation that your immune system can be influenced by your mental state isn't in itself magical. Our bodies respond to all sorts of mental states. We get red in the face when embarrassed, and sometimes must run to the toilet when scared, etc.

All this "psionic" "paranormal" label stuff would only apply if someone could do it to other people without them even knowing.


Sure, but I think most people would place very hard limits on what you can do to just think yourself better. Your mind can't cure cancer, eliminate viruses, regrow arms etc. Yet we routinely design clinical trials as if it can. And ... nobody seems to think this is in any way strange!

Placebos have been studied. Most studies have found that, while placebos are effective for treating pain, they are mostly not effective outside of that, when compared to receiving no treatment at all.

Some people really do become spontaneously cured of diseases, yes - it happens all the time, and is something worth studying further. But the idea that the placebo effect is what causes it, is not really well-founded.


Clinical trials do use placebos all the time, even when studying things that don't involve pain.

Somewhat notoriously some trials give patients a placebo that's actually another drug, as otherwise people could unblind themselves by not suffering side effects. The COVID vaccine trials did this. It doesn't make sense if you think the placebo effect only influences perception of pain, as otherwise you could lose the placebo entirely and double your statistical power (compare against a synthetic control group).


As much as that’s an eye-catching headline, even the author admits it was a bad study that hasn’t been reproduced.

There was other research indicating that the therapeutic framework the therapist uses has no influence on the probability of a positive outcome. What mattered more was whether the therapist was able to form a meaningful connection with the patient.

If the author told you their psychology study was reproducible, not dismissing it would be the other category of error.

Which I think is a critical aspect of the author's argument: the lack of replication.

There's a lot of factors at play. As I mentioned in my main comment, the field naturally has a metric fuckton of variables and noise, which makes even basic experiments extremely difficult. But there are other factors, like willingness to close that gap (is it surprising few want to climb a treacherous mountain?), as well as the whole structure of academia and the way metrics are formed. How do you create a foundation when you're not only not incentivized to replicate but actively dis-incentivized? How do you explore that mountain, which certainly has many pitfalls and uncertain paths, when you must publish frequently?

Those problems are not remotely isolated to psychology, but psychologists have a huge and crazy difficult mountain to climb. Should we be surprised that few attempt it, when avoiding the climb is far more likely to lead to academic success than helping the field make its way up? Even if you don't fall off, it can take a long time to actually capitalize on those gains. I do think this is a conversation academia needs to have. Everything in place is logical and makes sense; it was done for good reasons. But I think we need to be honest that exploration is just a highly risky business. You'll never make it across the ocean, to the moon, or to other worlds if you are unwilling to lose a few people (researchers who never make any impact) or risk having a few conmen (researchers who make shit up). Ironically, if you try too hard to prevent these from happening, you'll be doomed to only have them (you'll only explore just beyond the fence and you'll hear stories of what is imagined far beyond: either identical to just outside the fence, or wild tales. But you'll never know until you go). We can inch our way out or we can be brave. Unfortunately, I think it is only explorers who can tell the brave from the imaginative. It sucks, but is this not the nature of it? A story as old as stories are. But we wouldn't be where we are if we didn't engage in risky business.


It sounds to me like you are justifying a continuance of frauds that masquerade as scientific. The danger here is not just direct effects but also what they displace. One cannot actually do psychological studies beyond testing chemical remedies anymore. The entire field is overrun with charlatans, and 101 classes to make more of them, because they will all be treated as valid enough to fund and they are all much more exciting than the scientific process.

Oh dear god no. You can check my comment history to see I rant about that frequently. I work in ML, and dear god is there so much fraud.

What I'm arguing for is honesty. It's perfectly okay to do work where there's a high level of uncertainty, *UNDER THE CONDITION THAT YOU ARE UPFRONT ABOUT THAT UNCERTAINTY.* Anything short of that is at best deceptive. I'll encourage you to look at my recent comment history because I've said a lot about that.

But what I was trying to discuss in the previous comment is the systematic and structural components that encourage this deceptive and fraudulent behavior. Because if we don't recognize why it happens, then we can't really solve it. Sure, we can hit a reset button and it'll go away, but if we don't recognize the pressures that push fields in these directions, then it is just going to happen again.

The reason I'm arguing that we have to take risks is because many of these pressures are created by naive metrics that attempt to measure things that are not realistically measurable (so of course the metrics get hacked. It's literally Goodhart's law in action). But there are so many issues at play. The lack of incentivization of replication means few people earnestly read the works of others. It means science doesn't actually happen because the whole fucking point is replication. People aren't even reading the papers of their peers in their own departments! Literally the person in the office next to them!

The whole academic structure is screwy. You train someone for years to become an expert researcher (grad student) and then immediately put them in a manager role (professor)? The fuck were we doing that training for? I get managers, but with so many admins in universities why are professors doing less and less research? Advising grad students isn't research, as every grad student will tell you their professor doesn't get the details.

I could go on. But my point is the system is fucked up and overrun by bureaucrats who are risk averse. They'd rather rely on metrics they are unwilling to understand than acknowledge that they live in a world that is fuzzy and chaotic. That kind of blindness is for religion and conspiracies, not science.


> professional training as a therapist does not produce a statistically significant improvement in ability to treat mental health conditions.

It produces a statistically significant improvement, just not with people who are already gifted at it. You can get not gifted people and teach them to be not worse than gifted. It is not much, but it is not nothing either.


It seems that gifted people with zero training are, on average, just as good at this activity as people with years of formal training who self-selected to undergo that training. I'm not sure if that's true of any other activity.

Where did it say the education was ineffective? There are reasons to believe it is not the only path to being effective at helping others, but that does not invalidate that if you spend a few years learning tools and techniques and pattern matching to behaviors, you have a valid toolkit in front of you for being a therapist.

Now, it is a valid argument whether or not it should be required (and there is no requirement to label yourself as a "coach"), and the price tag on it is of course always a consideration. But being dismissive of higher education is just as silly as being overly dependent on it.


Part of the problem is the therapists (and medical practitioners in general) are often forbidden from doing the thing they were trained to do for a variety of reasons: risk and liability, patient turnaround, standardization. These things can get in the way of doing the right thing in the times where that is known. That’s before considering the ambiguous cases.

> often forbidden from doing the thing they were trained to do for a variety of reasons

you forgot to add `insurance company rules` to your list.


Can you give some examples?

Want to prescribe medication because you think it is the best treatment -> insurance company says no

This happens literally millions of times per day.


Therapists do not prescribe medication, because they are not doctors. You need a psychiatrist or other doctor for that.

GP says “and medical practitioners in general.”

I think a lot of people just never find the right therapist and then assume all therapists are terrible.

It’s interesting because even the most staunch opponents of mental health talk therapy have people in their life they talk to, they just don’t consider them therapists.


Well, sure, but "people in their life that they talk to" aren't really therapists. They're functioning quite differently - they can have a personal involvement that a therapist, ethically, isn't permitted to have. The sorts of things someone talks to with their friends overlaps with but is also often quite distinct from the sort of thing a therapist is probing for. There's no direct financial incentive to keep the "patient" coming. And they're making no claim to, broadly, help someone improve their overall mental health - people vent to their friends because it feels nice, not because it's necessarily constructive.

I promise you no therapist who is not engaging in illegal behavior cares about you coming back. This is pure delusion.

Although I agree it's a matter of finding the right therapist, I think that undersells the problem a fair bit.

There are large barriers to trialing a lot of therapists, and finding the right one can be like finding a needle in a haystack. Therapy is quite expensive, and many therapists already have a full caseload. And the pool of therapists is very homogeneous: essentially, a ton of well-off white women who might not have the tools or shared experiences to facilitate a helpful therapeutic alliance with individuals coming from a broader background than they're comfortable with.


But this begs the same question: if mental illness really is what psychologists say it is, and if treatment is a learnable skill, then the practitioner shouldn't matter that much assuming his training was good.

But most evidence suggests that some "je ne sais quoi" has to exist in the therapeutic relationship.

In other words, Freud was right about Transference as a necessary ingredient to psychotherapy (and probably about a lot else that is still too controversial to talk about or pass IRB muster).


Isn't the "je ne sais quoi" just feeling safe to be themselves and open? Whatever that means for each person

In my experience, most staunch opponents of mental health talk therapy are people who have serious issues and really do not want them to be talked about and fixed. Issues like bad anger management when they want to keep the anger, or an eating disorder which makes you not want to heal, because you might get cured and fat.

There is such a thing as being unhappy about actual therapy that did nothing or harmed you. But the staunch opponents you see, who have never been to therapy and have only a movie understanding of it, have tons of strong opinions or fear.


So the patient is holding the therapist wrong?

"Just one more therapist bro" is what defenders of modern psychology use. It is always your fault the therapist didn't work out. Always your fault you aren't trying hard enough. There can never be systemic issues.

There was other research indicating that the therapeutic framework the therapist uses has no influence on the probability of a positive outcome. What mattered more is whether a therapist was able to form a meaningful connection with the patient.

>epicycles all the way down

I don't mind this idea at all! I'm the abyss staring into itself.

That said I don't think digging into skulls until we identify the neurons that cause the big sad or teaching people ways to cope with their awful lives is worth much. I want psychology to help me understand (a maybe terrible) existence, not to solve it. Something like overturning our intuitions is perfect. If tomorrow they make a flawless anti-depressant that will let me endure misery I argue we'll be worse off.


> There’s a thought that’s haunted me for years: we’re doing all this research in psychology, but are we learning anything?

Advancements in PTSD, dissociation, treatment-resistant depression, and attachment disorders are astounding. We know a lot more about how people work.

Psychology has always been a person centered field - humans are complex, and what it does is more akin to QA than coding. It’s individualized. It doesn’t love studies because the underlying mechanism or traumas can be different even for people who went through the same things.

Unfortunately advancements are not evenly distributed. There is an army of CBT therapists who work in one method that works for some but not the majority. Finding a practitioner is a crapshoot even when looking for specialists.

The DSM is functionally treated as a billing manual, and to be paid practitioners need to jump through a long series of hoops. The medical billing side can’t deal with the complexity.

All these aside, there are people who are really, truly healing in ways they wouldn't without the field. There are ideas that propagate through human culture and make human behavior more understandable.


Given that “humans are complex” and “it’s individualized”, would advancements be greater and faster by just allowing clinicians and scientists to just talk things out instead of coming up with “studies” which pretend to be “science” with a low reproducibility rate (and non-publishing on null results)?

You may be looking for qualitative data and reporting which is on the rise!

The term "evidence based" is bandied about all the time because insurance companies don't want to cover treatments that aren't considered standard. The problem is everyone is different on some level, and we often don't have the resources to get to the root of any problems. So treatments that may work extremely effectively for some may be thrown out because they don't work effectively for everyone, and can be contraindicated. Somatic therapists especially have to deal with this. Effective treatments are often outside of the "evidence based" tests, which can be based entirely around showing symptom improvement. This creates a catch-22 where if you lessen the restrictions you get a lot of crackpot providers, where if you keep them tight you keep people from being able to access treatments that may work well for them.

There's also competing models for mental problems and approaches - the psychiatric model is similar to a doctor giving treatment for an illness. They tend to have a belief in biological determinism, IE if a parent had an illness then its likely you will have one too. The Biopsychosocial model is a little bit more holistic around the experiences of people and their physical environment and upbringing. The Trauma model is one I personally ascribe more to which conceptualizes mental health problems as understandable reactions to traumatic events that are conditioned within us.

There are a lot of people who get real relief from outside the mainstream providers, and there are a lot of people for whom the standard providers have not been able to help. I think that is part of why there's so much activity around finding better models right now.


I think Psychology is really interesting - what could be more interesting than studying why humans behave the way we do.

For a while in undergrad I was a double math and psychology major. I spent a semester doing undergraduate research in a psychology lab where I would take people in to be subjects in the experiment and then write them a check afterwards for participating. During the experiment they'd listen to one-syllable sounds, some from English and some not, and the experiment tested whether the subjects were better at remembering the one-syllable sounds from the English language when the sounds were played back after the first set.

As I type this I think it's an interesting experiment, but it felt to me that the interesting questions in Psychology need to be so dumbed down to be able to run an experiment to test any hypothesis that what's actually interesting about Psychology gets lost in the weeds of trying to rigorously follow the scientific method. I don't know a solution to this or whether it's even a problem, but it's a problem endemic to the question of whether we're actually making progress in psychology. For the record, I do think we're making progress in Psychology.


> The best thing to do is forget all of it, estrange yourself from the word “creativity” entirely, and start with the extremely bizarre fact that humans write songs and novels and solve math problems, and we don't know how this happens.

(Found in note [10] in the article.)

This reads, very much in a positive way, like someone describing the idea of "root cause analysis". That bodes well for this person to epistemically "know" stuff like they write about. At least they'll be more likely to "know that they don't know" yet, which is a necessary step along the way.

It reminds me of a saying I've heard: "Forget what you know." ("Forget" is even in the quote. I wouldn't be surprised if the author is familiar with the saying.) Perhaps more clearly, "Forget what you think you know." The idea being for one to identify and challenge their assumptions in order to work it out from "first principles".


> Scientific revolutions arise from crises—that moment when we’ve piled up too much stuff that doesn’t make sense and the dam finally breaks, washing away our old theories and giving us space to build new ones

I can't wait for this to happen to our understanding of the Big Bang, because the status quo explanation relies on very precise math about things thought to happen microseconds after the supposed start of existence, while our earliest observation is (and can only be) from 380,000 years later, and new observations with new, more precise instruments often seem to contradict cosmological predictions rather than confirm them.


I suppose that's why some categorize psychology as a "soft" science?

To me, psychology is more of a religion, albeit a secular one. An inner-dialogue experiment most of us are doing all the time.

When the OP mentioned "folk" science, I thought he'd start talking about folk stories... which actually I think in the realm of psychology would start it getting closer to who we are and how we collectively participate in the world.


I have a solid example here that boggles my mind every now and then, watching people kill other people, especially in the US: I would expect, after all those years, mental health to be accounted for in a serious criminal case like killing somebody. Meaning that a person who kills somebody else definitely has mental issues that come from their childhood. So what about the parents in those cases, don't they have their part in the sick mentality of their child? Why not press charges against them?

> Meaning that a person who kills somebody else definitely has mental issues that come from their childhood. So what about parents in those cases, aren't they having their part on the sick mentality of their child? Why not pressing charges to them?

Even assuming the premise of killing someone implies mental illness, and assuming the mental illness stems from trauma, there's a pretty large leap in reasoning here. Why must the trauma come during childhood? Why not in adulthood? Even in childhood, why does it have to come from the parents?

Then you have the idea of continuing up the causal chain. Why are we pressing charges against the parents? If the parents traumatized their kids, there's a good chance they did it due to their own mental illness/trauma, which means the parents themselves were abused in childhood. So we should go after their parents... except that just means we should go after their parents' parents... ad infinitum.


> a person who kills somebody else definitely has mental issues that come from their childhood

People that kill other people often function well in their society. It doesn't make sense to me to classify that as mental issues. People are inherently territorial, aggressive animals - at least to the extent being so doesn't make them much of an outlier.


> Meaning that a person who kills somebody else definitely has mental issues that come from their childhood.

Is that even true?


(I'm about to go to sleep, so I'm not going to stick around and argue. I've been on this site for --good god-- ten years next month, you can check my comment history and judge for yourself whether you think I'm full of baloney or not. This is just painful, I have to say something.)

For the love of all that's good and noble please do science to Neurolinguistic Programming please.


https://en.wikipedia.org/wiki/List_of_cognitive_biases

I think the science allows people to see what the author is attempting fairly clearly. =3


"Exact science, Mr Angier, is not an exact science."

David Bowie (as Nikola Tesla) in The Prestige


Oo my favorite topic! Great writing and the right themes are there, but I think they're missing a lot by not taking a more historically-holistic view. Aka wondering what all the people who've been criticizing psychology think, from Chomsky to Piaget to Lacan to Freud to Husserl to Hegel to Kant to Locke to Scotus to Ibn Sina all the way back to the OG, Aristotle.

Obviously some were more empirical than others so you can’t believe them all, but without engaging with their works — even in a negative way - you’re forced to reinvent the wheel, like the bitcoin people did with banking regulations.

For example, this quote makes me feel the author thinks psychology is more special/unusual than it is:

  We’re in good company here, because this is how other fields got their start. Galileo spent a lot of time trying to overturn folk physics: “I know it seems like the Earth is standing still, but it’s actually moving.”
  
In what way has any natural science been anything other than overturning folk theories? What else could you possibly do with systematic thought other than contradict unsystematic thought?

In this case, this whole article is written from the assumption that true, proper, scientific psychology is exclusively the domain of the Behaviorists. This is a popular view among people who run empirical studies all day for obvious reasons (it’s way cheaper and easier to study behavior reliably), but those aren’t the only psychologists. Clinical psychology (therapy) is usually based in cognitive frameworks or psychoanalytical, pedagogy is largely indebted to the structuralism of Piaget, and sociology/anthropology have their own set of postmodern, Marxist, and other oddball influences.

All of those academies are definitely part of psychology IMO, and their achievements are undeniable!

For anyone who finds this interesting and wants to dunk on behaviorists with me, just google “Chomsky behaviorism” and select your fave content medium — he’s been beating this drum for over half a century, lol.


A great article, particularly for its candor.

As Neil DeGrasse Tyson and others have said, in the same way that chemistry replaced alchemy, neuroscience will replace psychology. But this isn't likely to happen soon -- the human brain is too complex for present-day efforts.

But there's some progress. In a recent breakthrough, we fully mapped the brain of a fruit fly (https://www.nature.com/articles/d41586-024-03190-y).


It's a piss-poor article that was written to vent, and it didn't try very hard to find good psychological research. The episodic memory literature is very strong, IMO, yet never gets brought up in these kinds of articles. It's always the fluffy puffy research that fuels tabloid headlines, not the research that shows, for example, differential patterns of memory strategies over child development, or the contributions of context to recognition memory, or the differences between recollection and familiarity processes supporting recognition memory... you know, all the stuff that is not flashy for tabloids, but is real psychological science. Dr. Charan Ranganath was a member of my dissertation committee who recently wrote a wonderful book about memory and gave some really fantastic interviews. For example, on Fresh Air: https://www.npr.org/transcripts/1233900923 Now yes, some of this is informed by neuroimaging and neuroscience spanning human and animal models, but also by lots and lots of behavioral memory research. And the findings that are discussed are pretty reliable, shown over and over again in different ways. So, no, this article is not great. It did not do diligent research. It's a rant that focuses on specific types of research that are a small minority of the REAL field.

> ... all the stuff that is not flashy for tabloids, but is real psychological science.

Real psychological science would produce falsifiable theories -- theories that in principle would be discarded after a conclusive failure in impartial empirical tests. Instead, landmark psychological theories that are discarded, result instead from public outcry, not falsification. Examples include Drapetomania, prefrontal lobotomy, recovered memory therapy, Asperger syndrome.

Trained therapists do no better than properly motivated laypeople. This is not meant to disparage either group, some of whom are very effective, but no one knows why a particular person becomes an effective therapist.

On leaving his position as NIMH director, psychiatrist Thomas Insel said, “I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness.” (https://www.psychologytoday.com/us/blog/theory-of-knowledge/...)

All this will be swept away by a future neuroscience that will shape testable, falsifiable theories about human behavior. Today's psychological alchemy will be replaced by tomorrow's neuroscience chemistry. But as the above Insel quote shows, we're nowhere near that goal.


Why do you, just as the article, jump from one topic to another, acting like it is a logical progression and not a rant?

Drapetomania (1851), prefrontal lobotomy, recovered memory therapy, what the hell are you talking about? These are not scientific theories from the modern era.

Plenty of hypotheses have been left in the dust because they failed at explaining aspects of a phenomenon. Working memory research has some great, easy to understand progression of theories in the 70's and 80's. Your quote from the NIMH director jumps to a new topic, and is the expression of regret that more didn't get achieved in a very hard field, and the relationship between genes and mental health is not straightforward at all, and just like cancer research, it turned out the problems were much much harder than once thought.

Your response is as bad as the article: a rant that ignores the good work and hangs its hat on the fringe or flashy work. Try again. Show your work.


> Working memory research has some great, easy to understand progression of theories in the 70's and 80's.

Those are descriptions, not explanations. Science requires testable, falsifiable explanations -- theories, not anecdotes.

> Drapetomania (1851), prefrontal lobotomy, recovered memory therapy, what the hell are you talking about? These are not scientific theories from the modern era.

Yes, that's true. It's true because there are no scientific theories in psychology, past or present. Plenty of narratives, descriptions, but no explanations.

Psychology can describe behavior. Neuroscience will eventually explain behavior.

> Your response is as bad as the article ...

I suggest that you address the topic, not the participants.


And enough with the strict Popperism

> And enough with the strict Popperism

Enough with relying on science's universally accepted definition.


Silly response. Popperism is not a universally accepted definition of science. Not even by scientists themselves.

> Popperism is not a universally accepted definition of science. Not even by scientists themselves.

Well, false, but you would already know this if you had scientific training. It's discouraging to see so many young people trying to dismantle the Enlightenment without a full awareness of its origins and rationale.

Calling the foundation of science "Popperism" is like calling Democracy "Athensism," as though it's a temporary fashion or fad, open to replacement by something easier to negotiate.

Scientists sometimes grant a field a temporary reprieve to allow it to evolve -- string theory comes to mind -- but no one with scientific training dismisses the critical role played by falsifiability.

In Carl Sagan's "Baloney Detection Kit" (https://centerforinquiry.org/learning-resources/carl-sagans-...), we find this: "Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much."

Guess how many scientists risk their professional standing by arguing against this self-evident principle?


That's bullshit. It's like if you tried to say classical physics and chemistry were not science because they provided a description e.g. given heat, on average particles increase in velocity, on average exerting more force/pressure, which on average, is along this formula. Bullshit.

> That's bullshit.

Quite the argument. But you see, science has been defined, and psychology doesn't meet the definition: https://www.britannica.com/topic/criterion-of-falsifiability .

Quote: "criterion of falsifiability, in the philosophy of science, a standard of evaluation of putatively scientific theories, according to which a theory is genuinely scientific only if it is possible in principle to establish that it is false." ... "According to Popper, some disciplines that have claimed scientific validity—e.g., astrology, metaphysics, Marxism, and psychoanalysis —are not empirical sciences, because their subject matter cannot be falsified in this manner."

* Psychology studies the mind.

* The mind is not part of nature.

* Science requires empirical evidence and empirical falsifiability, "empirical" meaning derived from nature.

* Q.E.D.

> Bullshit.

I believe you said that already, again without a supporting argument.


The mind is not part of nature? That is quite the dualist, non-scientific, claim.

I don't know what kind of mind you refer to, but in today's psychology there is no dualism, nor any claim that predictable psychological phenomena aren't based in interactions between matter. Just as early classical physics used idealized constructs to create theories that explain and predict changes in pressure and temperature, so do psychologists create idealized models to predict changes in behavior.

An analogy: two balls on a pool table collide. You idealize the balls, make observations, then build a model of how the variables relate (speed, mass, etc.). You form a hypothesis about the "laws" that govern behavior, then apply it to a similar system and test whether it generalizes. But those laws aren't reality; they're a model of reality. This is the process of science, and it's exactly what psychology research does. We try to form a hypothesis that describes a set of behaviors, often with mathematical models (in cognitive psych), then apply it to another set of balls/people to see if it generalizes.

As to Popper, there is no singular definition of science, and Britannica is hardly an authority on this complex matter.

If I hypothesize that there is an age-related increase in false alarms to a specific type of memory cue, and I repeatedly see the opposite age-related trajectory, then I have falsified the hypothesis, either requiring greater specificity (e.g. verbal probes but not visual probes) or dropping it altogether for an alternative, more encompassing theory.
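As an aside, the falsification test described above is simple to operationalize. Here's a minimal sketch of a one-sided permutation test on group false-alarm rates; the numbers and group sizes are entirely made-up illustrations, not data from any study:

```python
import random
from statistics import mean

def permutation_test(young, old, n_perm=10_000, seed=0):
    """One-sided two-sample permutation test: does the old group
    have a higher mean false-alarm rate than the young group?
    Returns the observed difference and the permutation p-value."""
    rng = random.Random(seed)
    observed = mean(old) - mean(young)
    pooled = young + old
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_old = pooled[:len(old)]
        perm_young = pooled[len(old):]
        # Count how often a random relabeling beats the observed gap
        if mean(perm_old) - mean(perm_young) >= observed:
            count += 1
    return observed, count / n_perm

# Hypothetical per-participant false-alarm rates (illustrative only)
young = [0.10, 0.12, 0.08, 0.15, 0.11, 0.09, 0.13, 0.10]
old   = [0.18, 0.22, 0.16, 0.25, 0.20, 0.19, 0.21, 0.17]

diff, p = permutation_test(young, old)
# A large p (or a negative diff) would count against the hypothesis.
```

The point mirrors the comment: the hypothesis makes a directional prediction, and a result in the opposite direction (or no difference) is a falsification, prompting either a narrower hypothesis or a replacement theory.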

It is really difficult for me to comprehend your point of view as anything but one primarily driven by personal biases.


> The mind is not part of nature?

Nope. It's a theoretical construct with no empirically measurable properties. https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem : "The mind–body problem is a philosophical problem concerning the relationship between thought and consciousness in the human mind and body."

This is a problem because there's no obvious connection between the mind and everyday reality.

> As to Popper, there is no singular definition of science ...

This is not true. Falsifiability and empirical evidence are part of the universally accepted definition of science. This is why mathematics, as important as it is to science, is not itself accepted as a science -- like psychology, it doesn't address empirical reality.

Carl Sagan, quoted from "The baloney detection kit" (https://www.themarginalian.org/2014/01/03/baloney-detection-...): "Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable, are not worth much. Consider the grand idea that our Universe and everything in it is just an elementary particle—an electron, say—in a much bigger Cosmos. But if we can never acquire information from outside our Universe, is not the idea incapable of disproof? You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result."

> It is really difficult for me to comprehend your point of view as anything but one primarily driven by personal biases.

You do understand, don't you, that when you avoid the topic and digress to personal attacks, you acknowledge that you don't have a meaningful counterargument?

In the mid-1990s, during the repressed memory fad (https://en.wikipedia.org/wiki/Repressed_memory), about the time virgins began reporting imaginary rapes, the legal system realized they were being played and the falsely accused were released from prison. The reason? Psychology is not a science, consequently these bogus "repressed memories" weren't ever subjected to scientific standards of evidence.

You need to realize that psychology's scientific standing isn't just a philosophical tea party -- it has real-world consequences. The wrongly accused repressed memory victims were released because psychology is not a science. The earlier "refrigerator mother" fad fell apart because psychology is not a science. Pre-frontal lobotomy was outlawed because psychology is not a science.

More recently, Asperger syndrome was abandoned because anyone with some acting ability could get the diagnosis, and because Albert Einstein, Bill Gates, and Isaac Newton were assigned it, it became the first popular fad diagnosis, attractive to young people. Consequently the diagnosis became an epidemic, after which psychologists abandoned it, explaining that it isn't "based in science".

If you decide to reply, try addressing the topic.


PART 1

>> The mind is not part of nature?

>Nope. It's a theoretical construct with no empirically measurable properties. https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem : "The mind–body problem is a philosophical problem concerning the relationship between thought and consciousness in the human mind and body." This is a problem because there's no obvious connection between the mind and everyday reality.

Now, I finally understand where you are coming from, but I believe it is misconceived. While the mind is indeed a complex construct, cognitive psychology does not treat it as separate from empirical reality. Modern cognitive science and psychology link the mind's activities to physical processes in the brain, measurable through tools like neuroimaging (fMRI, PET scans), electroencephalograms (EEGs), and other methods that correlate mental states with brain activity. These techniques allow researchers to observe brain regions involved in memory, perception, and decision-making, offering empirical support for the study of mental processes. While the "mind-body problem" is a philosophical issue, cognitive psychology and neuroscience work from a materialist perspective that views the mind as arising from brain activity, making it empirically approachable.

I agree that early psychology was not scientific. Psychological research has evolved significantly since René Descartes' dualism separated the mind and body as distinct entities. Yes, this view dominated early thought, yes, Fuck Freud. Freud was not a scientist. The beginning of empirical psychology actually started in the early 20th century with 'Behaviorism', led by figures like John B. Watson and B.F. Skinner, focusing strictly on observable behaviors and rejecting the study of the mind, rooted in the desire to make psychology as rigorous and objective as the natural sciences. Behaviorists argued that psychology should limit itself to measurable, external actions, drawing on the idea that only behaviors that can be observed and quantified objectively should and can be studied scientifically. They argued that the mind could not be directly observed, and thus would lead to subjective and unscientific conclusions.

For Watson and Skinner, all behavior was the result of conditioning by the environment, either treating the mind as a black box that transforms inputs into outputs, or by denying a mind at all. But by the mid-20th century, the limitations of behaviorism became extremely clear. For example, although Skinner tried to explain how children learn language through reinforcement, Chomsky famously showed that theory the door [1]. Other fields also began to highlight the inadequacies of behaviorism. The rise of computers in the 1950s and 1960s offered a powerful analogy for human cognition. Scientists began to view the mind as an information-processing system, much like a computer, capable of storing, retrieving, and manipulating information. This shift led to the emergence of cognitive psychology, which treated the mind as a complex system with its own internal rules and processes, much of which could be scientifically studied through indirect methods like reaction times, error rates, and neuroimaging. For instance, studies in memory and perception revealed that people often reconstruct memories or interpret stimuli based on prior knowledge, something behaviorism could not explain since it denied the role of internal representations. Furthermore, cognitive psychologists like George Miller and Ulric Neisser demonstrated that mental processes could be objectively studied. Miller's work on the capacity of short-term memory (his famous "7±2" paper) and Neisser's Cognitive Psychology (1967), which consolidated the field, showed that cognition involved quantifiable processes like attention, memory, and problem-solving.
Another example: while most cognitive psychology of memory based its theorizing on objective measures (hits, misses, false alarms, and correct rejections under different experimental conditions), others began probing subjective reports of the memory experience alongside the yes/no recognition response, asking questions like "How sure are you, on a Likert scale?" or "How visual was the memory?". These kinds of questions rely on subjective reports, yet they contain information with empirical external validity: they track objective memory accuracy, and objective increases in BOLD response within regions implicated by lesion studies in rats and non-human primates (and in humans with damage to those regions). Indeed, the brain of the famous amnesic patient H.M. was in the freezer behind my lab a decade ago.

>> As to Popper, there is no singular definition of science ...

> This is not true. Falsifiability and empirical evidence are part of the universally accepted definition of science. This is why mathematics, as important as it is to science, is not itself accepted as a science -- like psychology, it doesn't address empirical reality.

> Carl Sagan, quoted from "The baloney detection kit" (https://www.themarginalian.org/2014/01/03/baloney-detection-...): "Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable, are not worth much. Consider the grand idea that our Universe and everything in it is just an elementary particle—an electron, say—in a much bigger Cosmos. But if we can never acquire information from outside our Universe, is not the idea incapable of disproof? You must be able to check assertions out. Inveterate skeptics must be given the chance to follow your reasoning, to duplicate your experiments and see if they get the same result."

Popper’s criterion of falsifiability has been a cornerstone of scientific philosophy, and it is important because it helps distinguish science from pseudoscience. Popper's takedown of Freud makes him a legend in my book. As I said earlier, Fuck Freud. However, it is important to also recognize that falsifiability is not always straightforward. Complex systems like those in biology do not easily lend themselves to clean falsification; theories are often probabilistic and deal with multifactorial causes rather than strict one-to-one cause-and-effect relationships. Consider a theory about how stress affects memory: testing such a theory might involve controlled experiments, but it’s often difficult to fully falsify because human and animal behavior is influenced by many variables. Yet the predictive accuracy of models can be assessed through statistical analysis, allowing for refinement and testing of these theories without outright falsification in a Popperian sense. Moreover, science is also about confirmation within paradigms [2], and Popper's notion of falsification is often idealized beyond what is actually achievable. Broader definitions of science today include concepts like cumulative evidence and refinement, explanatory power, and predictive utility.

Mathematics is indeed abstract, often dealing with logical structures rather than empirical data, but psychology, especially in its modern form, does address empirical reality. Unlike mathematics, psychology relies heavily on empirical evidence to validate or falsify its hypotheses. Psychology uses behavioral proxies and neurobiological measures to ground mental processes in the physical world.


Some valid empirical questions are not testable in the Popperian sense. Theories of evolution, for example, are not, at least not in any useful way. Instead, a model-based conception of science is required.

> Some valid empirical questions are not testable in the Popperian sense.

If by "valid" you mean "scientific", no, not true. As long as such questions cannot be tested and potentially falsified, they aren't science.

> Instead, a model based conception of science is required.

Einstein had a model for General Relativity in 1915, but the scientific world reserved judgment until it could be tested and potentially falsified. In 1919 an opportunity for a falsifiable test appeared -- an eclipse of the sun that would show the effect of space-time curvature and either validate or falsify Einstein's theory. (https://eclipse2017.nasa.gov/testing-general-relativity)

Einstein's model was interesting, but until a falsifiable test could be carried out, it was philosophical speculation. This is how science works.

I've been having this same conversation for decades -- psychologists want the status of science without the discipline of science. But that would require science to be redefined, which would dismantle the Enlightenment. Not happening.


You keep repeating the same things over and over, but it doesn't make it true.

Read this book.

https://press.princeton.edu/books/paperback/9780691000466/th...


> You keep repeating the same things over and over, but it doesn't make it true.

A worthwhile argument must have some depth. This fails the test.

> Read this book.

This is not how online fora work. If you want to make an argument ... make the argument.


part 2

>"Psychology is not a science, as demonstrated by examples like repressed memory."

The "repressed memory" affair was an embarrassing controversy, but it does not negate the scientific validity of psychology as a whole. Firstly, its acceptance among psychological researchers was quite limited compared to its popularity among certain therapists and clinicians, and among "expert witnesses" in legal contexts. You attribute the dropping of repressed memory theory to external forces, but within the field of psychology it was never a dominant paradigm, and it did not hold up to empirical scrutiny. Memory researchers like Elizabeth Loftus, a leading figure in the study of human memory, argued that memory is not a perfect recording of past events and that memories are highly malleable, subject to suggestion and reconstruction over time. Loftus and others conducted research showing how false memories could be created, particularly through suggestive therapy techniques. For example, in controlled experiments, Loftus demonstrated that people could be made to "remember" events that never occurred, simply through suggestive questioning. Loftus's work was phenomenal [3]. Repressed memory was popular among Freudian psychoanalysts, who dominated the therapy field... but that's like blaming chemists for alchemists spreading bullshit. Again, Fuck Freud.

>"Psychology isn't a science, as shown by fads like 'refrigerator mothers' and prefrontal lobotomy."

There has been a lot that has been done wrong. These particular theories and practices were not empirically based and did not stand up to scientific scrutiny. In general, clinical psychology and psychiatry have lagged the most in terms of scientific rigor, in part because practitioners spend so much of their time working as clinicians and, personally, I think they need to feel like they are helping their patients, and thus are prone to bias. However, the autism research field is increasingly sophisticated, employing neuroscientific and psychological methods. Moreover, and importantly, it is rapidly incorporating concepts of neurodiversity to temper a purely medical-oriented ideology that tends to pathologize everything about autism.

>"Asperger's syndrome was abandoned because it became a popular fad, not based in science."

Look. I am a developmental psychologist at a major University who conducts autism research. I am telling you now: Asperger's syndrome was not abandoned because it was a "fad." Instead, it was reclassified under the umbrella of autism spectrum disorder (ASD) in the DSM-5 to better reflect the continuum of autism-related symptoms. The decision to merge these diagnoses came after extensive scientific debate and empirical research, which demonstrated that Asperger's syndrome and other autism-related diagnoses share overlapping characteristics. This reflects the refinement of psychological diagnostic criteria based on ongoing research, rather than an outright abandonment due to a lack of scientific basis.

>"Psychology has real-world consequences because it is not a science."

Psychology indeed has real-world consequences, as do all sciences.

[1] https://chomsky.info/1967____/
[2] https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...
[3] https://en.wikipedia.org/wiki/Elizabeth_Loftus


I must add this second reply:

> The "repressed memory" was an embarrassing controversy

It was not an "embarrassing controversy". Innocent people were thrown in jail based on the imaginary claims of witnesses -- then jurors, then judges -- who wrongly thought psychology -- and repressed memory therapy -- have the status of science. They do not.

After any number of cases, for example involving virgins reporting imaginary rapes, the legal system finally realized they were being played and the innocent were freed.

The problem was that people still granted psychology the status of science, as late as the mid-1990s, including the legal system. Not any more.

(https://en.wikipedia.org/wiki/Repressed_memory) : "Repressed memory is a controversial, and largely scientifically discredited, psychiatric phenomenon which involves an inability to recall autobiographical information, usually of a traumatic or stressful nature." [ ... ] "Subsequent accusations based on such "recovered memories" led to substantial harm of individuals implicated as perpetrators, sometimes resulting in false convictions and years of incarceration."

So, according to you, these were actually years of "embarrassing" incarceration of innocents. Suit yourself.

> Look. I am a developmental psychologist at a major University ...

Great -- an appeal to authority. Were you never taught this is a logical fallacy? (https://en.wikipedia.org/wiki/Argument_from_authority) : "The argument from authority is a logical fallacy ..."

> Asperger's syndrome was not abandoned because it was a "fad."

That is exactly what happened. In a nutshell:

     * Hans Asperger identified it in 1944.
     * Psychologists later identified Isaac Newton, Thomas Jefferson, Albert Einstein and Bill Gates (among others) as suffering from it.
     * This roster of famous "sufferers" made the diagnosis popular among young people (sometimes also their parents), many of whom sought the diagnosis for themselves.
     * Overdiagnosis resulted in what is now described as an epidemic (https://time.com/archive/6641066/the-end-of-an-epidemic/) of Asperger's diagnoses involving people with a modicum of acting ability and a desire to have the same mental illness as Albert Einstein and Bill Gates.
     * In response, psychologists folded the diagnosis into a larger category with a much less desirable name, with the very desirable effect of dramatically reducing the rate of diagnosis. This happened due to public perceptions -- not science, not clinical presentation, but public perceptions.

>> "Psychology has real-world consequences because it is not a science."

> Psychology indeed has real-world consequences, as do all sciences.

With one critical distinction -- psychology is not a science. This is true because it lacks a foundation in testable, falsifiable theories. Astrology has theories; the theories fail any reasonable test, so astrology is a failed science. Psychology has no such theories, so it can't be undermined by falsifiable tests of its claims.


> Now, I finally understand where you are coming from, but I believe it is misconceived. While the mind is indeed a complex construct, cognitive psychology does not treat it as separate from empirical reality.

It doesn't matter what psychologists believe, it is all about what can be proven scientifically.

The reason for the central role of the mind-body problem in philosophy is because scientists and thinkers know them to be distinct -- the mind and the body lie in separate, non-overlapping domains.

The mind is not a physical organ, it is a philosophical construct, therefore it cannot be studied scientifically. Were this not true, there would be no "mind-body problem." But there is, and it is central to psychology's problems. https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem : "It is not obvious how the concept of the mind and the concept of the body relate." That's true, and this issue would need to be conclusively resolved to turn psychology into a science.

--------------------------------------------------

Every scientifically trained person, from Freud to the present, who has studied human psychology, has reluctantly come to the conclusion that psychology is not a science.

In his book "Entwurf einer Psychologie" ("Project for a Scientific Psychology", 1895), Freud said, “Why I cannot fit it together [the organic and the psychological] I have not even begun to fathom.” Knowing that this book would ruin his relations with therapists, Freud ordered that the book not be published during his life.

The published views of many other scientists are available to you if you were curious, all of whom come to the same conclusion.

Under contract to the APA, Sigmund Koch created a six-volume tome (1963) meant to evaluate psychology's scientific standing. Koch concluded, "The hope of a psychological science became indistinguishable from the fact of psychological science. The entire subsequent history of psychology can be seen as a ritualistic endeavor to emulate the forms of science in order to sustain the delusion that it already is a science. The truth is that psychological statements which describe human behavior or which report results from tested research can be scientific. However, when there is a move from describing human behavior to explaining it there is also a move from science to opinion."

In case that quote was lost on you, Koch is saying that psychological measurements follow scientific standards until it's time to craft a theory, then things fall apart. This is why so many psychologists think psychology is a science -- it has a superficial similarity to science, until it's time to try to explain, to craft a theory.

In a now-famous lecture (1974), Nobel Prizewinner Richard P. Feynman said, "I think the educational and psychological studies I mentioned are examples of what I would like to call Cargo Cult Science. In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he’s the controller—and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. *So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.*"

Feynman's point is that the appearance of science isn't enough, there must be testable, falsifiable theories, but that is not possible when the thing being studied is not part of nature.

Former APA president Ronald F. Levant (2005) began a campaign to move psychologists toward evidence-based practice, saying, "Some APA members have asked me why I have chosen to sponsor an APA Presidential Initiative on Evidence-Based Practice (EBP) in Psychology, expressing fears that the results might be used against psychologists by managed-care companies and malpractice lawyers." His proposal fell flat on the ground that psychology couldn't possibly adopt EBP -- no scientific evidence, because no science.

Thomas Insel, director of the NIMH for 13 years, regularly exhorted psychologists to adopt science-based standards, finally giving up and resigning in 2015. Insel later wrote an article for Psychology Today in which he explained how 20 billion dollars of science funds were wasted, because ... wait for it ... psychology is not a science. (https://www.psychologytoday.com/us/blog/sacramento-street-ps...)

All this information -- and much more -- would be available to you if you were willing to critically test your own views ... like a scientist.


You can't even prove the mind exists. So maybe it doesn't, and the foundation for all of your dualism collapses in on itself.

All you really have is the emergent behavior of all that brain matter, and the fact that such things can be given description at various levels of abstraction.

Popperian science is completely unequipped to deal with the brain, or complex, evolving, particularistic systems generally, including evolution. You need a model-based science instead. You are completely incorrect that Popperian science is the universally accepted definition of science. It's not true in the philosophy of science, and it's not even true among scientists. It's merely a first-pass description of a broad mechanism of knowledge generation that is used in many fields of science.

https://plato.stanford.edu/entries/structure-scientific-theo... https://plato.stanford.edu/entries/models-science/


> You are completely incorrect that Popperian science is the universally accepted definition of science.

Please do some research on this topic -- falsifiability is an essential cornerstone of modern science. Required are testability, empirical evidence, falsifiability -- and falsifiability is the most important.

From Carl Sagan's "Baloney Detection Kit" (https://www.themarginalian.org/2014/01/03/baloney-detection-...) : "Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much."

-- Thousands of similar references from the world of science --

This is not a philosophical tea party -- these are the rules of science.


"Please do some research on this topic"

I have. I've read quite a bit of philosophy of science. I'm also a scientist.

I don't think you have. Start here: https://plato.stanford.edu/search/searcher.py?query=science


You have not ever specified how a model of memory, for example, cannot be falsified. You just state that it cannot be falsified.

If you are going to suggest "research" to me then provide research-grade materials, not pop-sci. Your arguments constantly appeal to authority, citing either science fiction authors, pop science writers, or a particular scientist making a declaration (e.g. a physicist) who I doubt knows the least thing about modern psychological research. It's weak evidence, and it's not in good faith.

You say neuroscience does not rely on the mind-body problem, but your claim that psychological research does (and cannot escape it) is based on arguments like "it's self-evident that the mind exists, therefore", without ever finishing that thought, constantly presuming your conclusion, constantly ignoring my arguments about modern psychological research.

I will try one more time.

The dualist mind-body problem is irrelevant to psychology. Our theories are about behavior. Measurable behavior. We construct hypotheses about how those behaviors might arise via the body and brain, test whether a model is valid, how much explanatory power it has, and in what situations it fails to explain behavioral data. We then revise our models.

We use more than surveys: we use neuroimaging and physiological measures, we record neuron spikes, to build out our understanding of how cognition occurs. Moreover, there is no real division between neuroscience and psychology today. Psychologists work with individuals who work with neurons on a plate, with rodents, with molecular biologists. You may say that only individuals who work on cells are scientists, but that's bullshit because, for example, when you put a couple hundred neurons together in a network, the dynamics become incredibly complicated and emergent, and network-level descriptions of the activity become important in understanding how each individual part works. But the whole is more than the sum of its parts. No really. It is. It's been shown over and over using information theory in synthetic networks. And, in any case, cells are incredibly complex, and so their behaviors get described with heuristics, with probabilities.

Moreover, you must consider the work at multiple levels of description together before you make a judgement about whether it's science. Science is not what one lab does, but how the whole endeavor works. Psychologists work at a coarse level of description, but their work has repeatedly informed the work of scientists working at a lower level. There is literally a ton of two-way information flow between those working at a very low level and those working at a high level.

We have a concept that we scientists use in this field: converging evidence. Converging evidence is not one study, but whole bodies of work from across multiple levels of analysis. You may not think that purely behavioral psychologists would be part of this endeavor, but they really really are a huge part of neuroscience progress. We are not separate. Psychologists are neuroscientists, helping knowledge converge on understanding the brain.

You can think of it as a forest-for-the-trees analogy. Or, in gradient descent, how sometimes considering a larger breadth, or a lower resolution, helps avoid getting stuck in a local minimum. Sometimes the wide perspective helps you make sense of what you are seeing in local data. So please spend some time thinking about how psychology works, not as a separate field but as an integral part of a larger field that together is moving forward on understanding the brain.

Your current view of my field is archaic, confused, and frankly incredibly naive.


> You have not ever specified how a model of memory, for example, cannot be falsified.

Wait ... did I read that right? It's up to psychology's critics to identify unfalsifiable claims, and to face the impossibility of proving a negative, which BTW is a classic logical error? I imagine that psychologists would want to use positive evidence to shore up the foundations of their own field, for example by demonstrating the connection between a memory model and its biological foundation.

On that topic, the recent drosophila study (https://www.smithsonianmag.com/smart-news/scientists-unveil-...), in which this tiny creature's entire brain was mapped in detail, is likely to be at least as revolutionary as the researchers claim, for the reason that nothing is left out. No guesswork -- memory, function, sensory connections -- simple, yes, but complete.

It's noteworthy that this work relies entirely on biology, with no role for the idea of a mind. Eventually this approach will see psychology wither away, as did alchemy, once more scientific approaches became possible.

In fact, now that I think about it, this neuroscience disregard for drosophila's mind ought to inspire criticism from psychologists on the ground that, according to psychology, the mind is an essential component of any valid study of brain function.

> The dualist mind body is irrelevant to psychology.

Of course it is. Because if this were not so, the field would collapse. The connection between mind and body is an article of faith among psychologists -- faith, not evidence.

> Your current view of my field is archaic, confused, and frankly incredibly naive.

That's quite the argument. Medieval and heartfelt.

Now I have a question. Given the drosophila study -- a complete survey of a small creature's brain in which the function of all the elements is known -- how many years will be required for the nervous system of a larger creature, and eventually a human being, to be mapped and characterized in such a way that a falsifiable, biological basis for behavior is demonstrated, one that does away with the very idea of a mind as a temporary and unnecessary crutch?

Given that inevitability, what will happen to psychology?

I also wonder about this, a question having nothing really to do with our discussion -- will we fully map the human nervous system as to form and function, using increasing amounts of computer power, or will AI take over society beforehand, also relying on increased computer power? Which will happen first? Will we exhibit the wisdom required to curb AI, prevent it from overwhelming our lame biological processors?

This last really is an open question, unlike the abandonment of psychology, which seems a foregone conclusion.


In any case, whether there is such a thing as emergence and downward causation is a really interesting topic. You might enjoy this paper: https://www.researchgate.net/publication/373342399_Can_there...

Mutual information can be decomposed into redundant and synergistic components, where synergy means there is more information in considering two parts together than in summing the information in each part alone.
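The synergy claim is easy to demonstrate with the textbook XOR case: if Y = X1 XOR X2 for two independent fair bits, each input alone carries zero information about Y, yet the pair together determines Y completely. A minimal sketch in plain Python (the function names are mine, not from any particular library):

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, keep):
    """Marginalize a joint distribution onto the given index positions."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info(joint, a, b):
    """I(A;B) = H(A) + H(B) - H(A,B), over index positions a and b."""
    return (entropy(marginal(joint, a)) + entropy(marginal(joint, b))
            - entropy(marginal(joint, a + b)))

# Joint distribution of (x1, x2, y) with y = x1 XOR x2 and fair, independent inputs.
joint = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product([0, 1], repeat=2)}

i_x1_y  = mutual_info(joint, (0,), (2,))    # each input alone: 0 bits about Y
i_x2_y  = mutual_info(joint, (1,), (2,))    # 0 bits
i_x12_y = mutual_info(joint, (0, 1), (2,))  # both together: 1 bit (purely synergistic)
print(i_x1_y, i_x2_y, i_x12_y)  # → 0.0 0.0 1.0
```

Here the joint mutual information (1 bit) exceeds the sum of the individual terms (0 bits), which is exactly the synergistic component described above.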


If we cannot agree that the mind is materialistic then there is no way forward for us, except to note that since you earlier stated that you think basic neuroscience has a chance, then the mind body problem is not that important after all.

> If we cannot agree that the mind is materialistic then there is no way forward for us, except to note that since you earlier stated that you think basic neuroscience has a chance, then the mind body problem is not that important after all.

On the contrary, aware of the importance of the mind-body problem, neuroscience disregards the concept of a mind, focusing instead on the brain and the nervous system. This doesn't address the mind-body problem, it ignores it as a pointless digression and a waste of time.

To the extent that neuroscience addresses the idea of a mind, it is as an obstacle to progress. When I first studied neuroscience, as a young student I would sometimes refer to the mind, at which point my professor would reply, "The what? Please explain." His goal was to address and dismiss the mind as soon and as conclusively as possible, so we could move on to more productive topics. You and I have exactly the same problem, for the same reason.

This is not to disparage the productive activities of psychological therapists -- I think I've made that clear in this conversation -- only to say it's not science.

Consider this example -- let's say I perform a study of astrology. I create a reliable survey quantifying the various astrological signs. My article accurately tells the reader how many Geminis and Tauruses there are in the population, with much interviewing and an impressive p-value, sufficient to assure publication. It's a solid scientific result by any measure.

Now the question -- does my entirely valid, scientific, astrology survey make astrology itself science? The answer is no, because my astrology result doesn't test or potentially falsify astrology's foundational theories.

Astrologers will insist that this valid, fully scientific astrology study means astrology is itself scientific -- never mind that it doesn't test astrology's foundational theories and claims. But this is obviously false -- only successful tests of those foundational theories could raise astrology to the status of science.

Psychology has the same problem as astrology, with the important difference that, unlike astrology, psychology doesn't have testable, falsifiable foundational theories. There are plenty of valid, scientific psychology studies with impressive p-values ... but they do not, and cannot, address testable, falsifiable foundational psychology theories, because the latter do not exist.

It's easy to show that astrology's basic claims -- that our lives are ruled by the positions of stars and planets -- fail any objective test, and therefore astrology is pseudoscience. But this is not possible for psychology, only because psychologists know better than to make testable, falsifiable claims about how and why the mind affects the body.

There are any number of studies that show a mind stimulus and a body response -- reliable and repeatable -- but no explanation for the connection between the two. That would require falsifiable, empirical psychological theories that explain how the mind affects the body, and more important, why. Indeed, a psychologist who offered such a theory would be expelled from the profession.

This is why neuroscience is the way forward.


* Psychology studies the mind.

Wrong.

* The mind is not part of nature.

Asserted without evidence. And, frankly, you can point to "the mind" about as well as you can point to "the soul" or "the spirit". You can't.

* Science requires empirical evidence and empirical falsifiability, "empirical" meaning derived from nature

Yes, and no. Sometimes falsifiability is too blunt an instrument. Why? Because complex, aggregate phenomena are multi-causal, and often historically particular, and there is no practical means to collect enough data, or design experiments, that can untangle all the causal threads. Remember that the phenomenon being explained is already at a higher level of abstraction over a series of objects and events that have a family resemblance. I can tell you (because I've looked at a lot of them) that no two human brains are the same.

* Q.E.D.

QED belongs in the realm of logic, which is best applicable to abstract objects. Mathematical ones, you might say. There is a reason that classical, logic-based approaches to reasoning about the real world (e.g. in robotics) failed. (:


>> Psychology studies the mind.

> Wrong.

Wikipedia: Psychology (https://en.wikipedia.org/wiki/Psychology): "Psychology is the scientific study of mind and behavior."

Do avoid pointless contradictions. Psychology's goal is to scientifically study the mind -- whether they can actually do that remains an open question.

>> * The mind is not part of nature.

> Asserted without evidence.

That would require proof of a negative, the most common tactic of a pseudoscientist ("You cannot prove Bigfoot false? All right, then -- he exists.") This means the positive burden of evidence for the thesis that the mind is part of nature belongs to psychologists -- it is, after all, their claim.

> Sometimes falsifiability is too blunt an instrument.

That's right, and assertions that cannot be falsified, cannot become part of science. This is one reason string theory is in limbo -- no falsifiable experimental validation. This is actually a bad example in modern times, because string theory has pretty much been discarded for multiple reasons, its untestability being just one.

>> * Q.E.D.

> QED belongs in the realm of logic, which is best applicable to abstract objects.

Wait ... it's your argument that saying "which was to be demonstrated" has a strict domain of applicability?


I think neuroscience will not replace psychology for the same reasons physics doesn't replace chemistry. In theory it might encompass it, but in practice it's a difficult way to get there, and therefore not the effective path.

> ... for the same reasons physics doesn't replace chemistry.

It's true that physics didn't replace chemistry -- instead it explained it. Physics gave chemistry a theoretical foundation. In the same way, neuroscience will explain psychology. But not any time soon.


> neuroscience will replace psychology

That seems like saying that hardware engineers should be the ones debugging software.


>> neuroscience will replace psychology

> That seems like saying that hardware engineers should be the ones debugging software.

Perhaps at first glance, but neuroscience will eventually deal with "software" issues in much the same way that tested, reliable computer programs are committed to ROM chips as a safeguard against inadvertent erasure.

In the future biological case, it might become possible to modify human behavior by "reprogramming" neuronal patterns semi-permanently. This might sound like a panacea at first glance, but it could have a truly scary "big brother" dimension if it's used to control people's behavior in a way meant to enforce social conformity.

But one thing for sure -- it will be more effective than talk therapy. :)


SELECT * FROM NSA.MetaConversationsOfAllMankind WHERE (psychology_hypothesis_applies(meta)) AND ..

We know, but what we know is bad news..


I've learned from psychology that meditation and exercise are helpful. So there's that.

That's very superficial imo. For me it is remembering that I am unique and being in tune with the inner voice which is really mine.

Maybe some people grow into that mindset naturally but it wasn't the case for me (even if there were no particular reason for the societal pressure to be super strong).


This was a great write up.

One thing that struck me as to the difficulty / young-ness of this field is the fact that it is the only science in which we study our own thinking selves. It's almost like trying to draw a picture of the exact spot you are standing on.


i think that that issue leads to particularly intense 'folk psychology' because everybody has experience with a mind, which will lead to everybody having their own little set of folk psychology beliefs that the scientists need to overturn.

contrast that to something like... geology. i'm sure there was folk geology back in the day about where different types of rocks come from, but for 99% of people they weren't particularly invested in it. so when the scientists came out with empirical research, most people would be fine with it.

but when psychologists confront folk psychology, people often take it as a personal affront. any comment section about psychology research (HN included) is filled with armchair experts contradicting researchers with their own pet theories


This was awesome. Don't just read the comments. Spoiler: psychology is not stupid, it's just young.

I completely agree. This is how I see psychiatry after having experienced it for decades: it's just slightly better than going to a shaman. It's witchcraft and it mostly doesn't work because, well, it's witchcraft. We just are not at a point in history where we can do much about these things and we have to be adults and accept that. It's okay, there was a time when we'd die of simple infections too. That's how psychology is now, very young and full of witchcraft.

Also, the article was funny... And wtf is that cheeseboat?


I had the same question. It's Cincinnati chili. Wikipedia has an article [1] and Serious Eats goes into way more detail [2]. It actually sounds pretty good. I'll give it a go but with much less cheese.

[1]: https://en.wikipedia.org/wiki/Cincinnati_chili

[2]: https://www.seriouseats.com/cincinnati-chili-recipe-8402230


I think this is very fair. Every science was indistinguishable from nonsense when starting out. But to say we shouldn't have pursued alchemy because we didn't understand chemistry? One likely led to the other.

I think one of the great ironies is that psychology is one of the hardest sciences but is treated so soft. I say this holding a degree in physics! (undergrad physics, grad CS/ML)

By this I mean that to make confident predictions, you need some serious statistics, but psych is one of the least math heavy sciences (thankfully they recently learned about Bayes and there's a revolution going on). Unlike physics or chemistry, you have so little control over your experiments.
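The Bayes point can be made concrete with a toy example (all numbers here are hypothetical): instead of a binary p < .05 verdict, a conjugate Beta-Binomial update yields a full posterior that tempers noisy data with a skeptical prior.

```python
# Beta-Binomial update: prior Beta(a, b), observe k successes in n trials.
def posterior(a, b, k, n):
    return a + k, b + (n - k)

def beta_mean(a, b):
    return a / (a + b)

# A skeptical prior centered on "no effect" (50%), then hypothetical data:
# 34 of 50 participants show the effect.
a, b = posterior(10, 10, 34, 50)
print(f"posterior mean {beta_mean(a, b):.2f}")  # pulled toward the data, tempered by the prior
```

The prior's pseudo-counts (10 and 10) encode how much evidence it takes to move you, which is a more honest dial than a hard significance cutoff.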

There's also the problem of measurements. We stress in experimental physics that you can only measure things by proxy. For example, when you measure distance with a ruler, you're not really measuring "a meter" but the ruler's approximation of a meter. This is why we care so much about calibration and uncertainty, making multiple measurements with different measuring devices (to get statistics on that class of device) and with different measuring techniques (e.g. ruler, laser range finder, etc). But psych? What the fuck does it even mean "to measure attention"?! It's hard enough dealing with the fact that "a meter" is "a construct", but in psych your concepts are much less well defined (i.e. higher uncertainty). And then everything is just empirical?! No causal system even (barely) attempted?! (In case you've ever wondered, this is a glimpse of why physicists struggle in ML. Not because of the work, but because of accepting the results. See also Dyson and von Neumann's elephant.)

I've jokingly likened psych to alchemy, meaning proto-chemistry -- chemistry prior to the atomic model (chemistry is "the study of electrons") -- or to astrology (astronomy pre-Kepler, not the astrology we see today). I do think that's where the field is at, because there are no fundamental laws. That doesn't mean it isn't useful. Copernicus, Brahe, Galileo (same time as Kepler; they fought), and many others did amazing work and are essential figures to astronomy and astrophysics today. But psych is in an interesting boat. There are many tools at their disposal that could really help them make major strides towards determining these "laws". But it'll take a serious revolution and some extremely tough math chops to get there. It likely won't come from ML (which suffers similar issues of rigor), but maybe from neuroscience or plain old stats (econ surprisingly contributes, though more to sociology). My worry is that the slop has too much momentum and that criticism will be dismissed because it is viewed as saying that the researchers are lazy, dumb, or incompetent rather than pointing at the monumental difficulties that are natural to the field (though both may be true, and one can cause the other). But I do hope to see it. Especially as someone in ML. We can really see the need to pin down concepts such as cognition, consciousness, intelligence, reasoning, emotions, desire, thinking, will, and so on. These are not remotely easy problems to solve. But it is easy to convince yourself that you do understand, as long as you stop asking why after a certain point.

And I do hope these conversations continue. Light is the best disinfectant. Science is about seeking truth, not answers. That often requires a lot of nuance, unfortunately. I know it will cause some to distrust science more, but I have the feeling they were already looking for reasons to.


As someone who did statistics and psychology, I'm very surprised by this take, for a few reasons:

1. Many of the early pioneers in statistics were psychologists.

2. The econ x psych connection is strong (eg econometrics and psychometrics share a lot in common and know of each other)

3. Many of the people I see with math chops trying to do psychology are bad at the philosophy side (eg what is a construct; how do constructs like intelligence get established)


As in many fields, statistical practices continually improve. And the parent comment has it right about the difficulty. In physics it's much easier to ensure your sample is representative; in psychology, heterogeneity is huge, and you have no way of ensuring that your last sample of 100 participants has the same characteristics as your next sample of 100.

I'm sorry, maybe I didn't communicate clearly. SubiculumCode commented the main part of what I wanted to convey, so I won't repeat.

1. Yes! But that doesn't exactly change things; in fact, it's part of my point. A big part of why this happened (and still does!) is due to the inherent difficulties and the lack of existing tools. If you ever get a chance, go look at a university physics lab. Even Columbia's nuclear reactors (fusion or fission!) -- I think many will be surprised by how "janky" it all looks. It's because they build the tools along the way, not because of a lack of monetary resources (well... that too...) but because the tools don't exist!

My critique of the psych field is that this is not more embraced. You have to embrace the fuzziness! The uncertainty. But the field is dominated by people publishing studies that use very simple statistical models and low sample sizes, and that put a lot of faith in unreliable metrics with arbitrary cutoffs (the best known being the p-value). Many people graduate grad school without a strong background in statistics and calculus (it's also easy to think this background is stronger than it is. And of course, there are also plenty who would be indistinguishable from mathematicians. But on average?). There are rockstars in every field, even when not recognized as rockstars. But it matters who the field follows.
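A quick simulation (a sketch with made-up parameters, stdlib only) shows why low sample sizes plus an arbitrary p < .05 cutoff are a bad combination: the studies that clear the cutoff systematically exaggerate the true effect (the "winner's curse").

```python
import math
import random
import statistics

random.seed(0)

def one_study(n=20, true_effect=0.2):
    """Two-group study with a small true effect (Cohen's d ~ 0.2), n per group."""
    treat = [random.gauss(true_effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    se = math.sqrt(statistics.variance(treat) / n + statistics.variance(ctrl) / n)
    return diff, abs(diff / se) > 1.96  # crude "p < .05" check

results = [one_study() for _ in range(5000)]
sig = [d for d, significant in results if significant]
power = len(sig) / len(results)
exaggeration = statistics.mean(sig) / 0.2

print(f"power ~ {power:.2f}")  # low: most real effects are missed
print(f"significant studies report ~{exaggeration:.1f}x the true effect")
```

The journals then publish mostly the significant (inflated) estimates, which is one mechanism behind failed replications.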

And I must be absolutely clear, this is not to say that such work and those results are useless. Utility and confidence are orthogonal. You might need 5 sigma confidence verified by multiple teams and replicated on different machines to declare discovery of a particle, but before that there are many works published with only a few sigma and plenty of purely theoretical works. (Note: in physics replication is highly valued. Most work is not novel and it is easy to climb the academic ladder without leading novel research. This is a whole other conversation though.) This is why I discussed Copernicus and Brahe but would not call them astronomers. That's not devaluing them, but rather noting a categorical difference due to the paradigm shift Kepler caused. Mind you, chemistry came even later!
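For scale, those sigma thresholds map to p-values like this (two-sided, normal approximation; a quick stdlib sketch):

```python
import math

def two_sided_p(sigma):
    """Two-sided tail probability of observing |z| >= sigma under the null."""
    return math.erfc(sigma / math.sqrt(2))

for s in (2, 3, 5):
    print(f"{s} sigma -> p ~ {two_sided_p(s):.1e}")
```

A "few sigma" result is orders of magnitude weaker than the 5-sigma discovery bar, yet it is still far stricter than the p < .05 convention.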

2. I specifically mention economists (my partner is one). I could highlight them more but I feel this would only add to confusion. I believe those close to the details will have no doubt to their role. I don't want to detract from their contribution but I also don't want to convolute my message which is already difficult to accurately convey.

3. I think this is orthogonal. I'm happy to bash on other fields if that makes my comment feel less like an attack and more like a critique (or wakeup call). I'm highly critical of my own community (ML) and believe it is important that we all are *most* critical of our own tribes, because even if we don't dictate where the ship goes we're not far removed from those that do. I'll rant all day if you want (or check my recent comment history if you want to know if I'm honest about this). I'll happily critique physicists who can solve high order PDEs in their sleep but struggle with for loops. Or the "retired engineer" trope that every physicist knows and most have experienced.

But it is hard to be critical while not offending (there are a few comments about this too). Maybe you disagree with my critique, but I hope you can reread the end of my comment as hopeful and encouraging. I want psychology to be taken more seriously. But if the field is unable to recognize why other fields are dismissive of it, then this won't happen. Sure, there are silly reasons that aren't reasonable, but that doesn't mean there's no reason.

It is a matter of fact that the (statistical) confidence of studies in psychology is much lower than those of the "hard" sciences (physics, chemistry, and yes, even biology (the last part is a joke. Read how you will)). In part this is due to the studies and researchers themselves, but the truth is that the biggest contributing factor is the nature of the topic. That is not an easy thing to deal with and I have a lot of sympathy for it. But how to handle it is up to the field.


Not all psych is as Jurassic as you describe. For example, cognitive psychology has better theories with more predictive power than the personality psychology that is often picked at. Sure, journals are flooded with underpowered studies and studies with very little link to theory, and there are still massive gaps in scientific knowledge, but the core constructs are solid.

  > Not all psych is as Jurassic as you describe.
Certainly. It's difficult to talk in general because there are always exceptions. I also think it's very easy to misunderstand my comment. I don't think psych is useless. But science isn't so much about having the right answer as about your confidence in your model of "the answer". It's not binary. My point is that when working in a field where it's very difficult to have high confidence, it's easy to normalize that and become overconfident in results because everyone else is working with similar levels. I have (and have made) similar criticisms of my own field, ML.

  > core constructs are solid.

  - which concepts?
  - how solid? 
  - how do you know they're solid?
You don't have to answer, this isn't a challenge. But these are questions every good scientist should be constantly asking about their own work and their own field. That's the "trust but verify" part. It's why every scientist should constantly challenge authority. Because replication is the foundation of science and you don't get that without the skepticism.

Lots of great points. I would start with semiotics, including during the problem definition phase, otherwise you could easily end up lost in language without the slightest clue of the predicament you're in.

Epistemology is also useful, because it might allow one to wonder if the problem space is non-deterministic (or not discoverable as).


Psychology is not inherently treated as soft, it's just that its human element attracts intuitive people much more than rational ones. If more rationally minded people took up the study and research of psychology fields, more hard stuff would come to the front, although the soft stuff is hardly behind in intelligence.

The parallels to ML you drew are on point. ML has this tendency to oversimplify complex phenomena with easy-to-produce datasets, because that's what ML folks do: they find a smart and easy way to create a dataset and then they focus on the models. But this falls apart pretty quickly when you go into societal problems, such as hate speech or misinformation. Maybe there it would be nice to have some rigor and theory behind the dataset instead of just winging it. I am working on societal biases in NLP and I feel confident that the majority of the datasets used have practically no validity.

I love ML. I came over from physics because it is so cool. But what I found odd is that few people were as interested in the math as I was, and even moreso were dismissive of the utility of the math (even when demonstrated!). It's gotten less mathy by the year.

My criticisms of ML aren't out of hate, but actually love. We've done great work, and you need to be excited to do research -- sometimes blind, sometimes going just on faith -- because it is grueling dealing with so much failure. Because it's so easy to mistake failure for success and success for failure. It's a worry that we'll be overtaken by conmen, and I have serious concerns that we are not moving towards building AGI. But it's difficult to criticize and receive criticism (many physics groups specifically train students to take it and deal with this seemingly harsh language. To separate your internal value from your idea). So I want to be clear that my criticism (which you can see in many of my posts) is not a call to stop ML or even slow it down, but a desire to sail in a different direction (or even to allow others to sail in those directions as opposed to being on one big ship).

I do know there are others in psych with similar stories and beliefs. But there's a whole conversation about the structure of academia if we're going to discuss how to stop building mega ships and allow people to truly explore (there will always be a big ship, and there always should! But it should never prevent those from venturing out to explore the unknown. (Obviously I'm a fan of Antoine de Saint-Exupéry lol ))


A tangent but if you're frustrated with the dismissal of math in ML I think you would probably enjoy diving in the Reinforcement Learning subfield; although some people tend to call it experimental math ;]

ML applied to robotics/embodiment is another fun topic, where physics is very relevant in every step of it. A bit harder to dataset-hack when a physical system is crumbling in front of you.


Oh I've found some areas. I'm particularly fond of explicit density models and normalizing flows. I do like RL too and was very surprised to hear that the math was "confusing" when it just seemed like weird notation. But it's hard to get works through review because even if you get SOTA for your architecture some asshat will ask you why you aren't better than the architectures being worked on by tens of thousands of people and with compute budgets 1000x yours. Though it's better if you find a very small niche so the reviews you get are more likely to actually know the bare basics of the topic.

> I do think that's where the field is at, because there is no fundamental laws

I think that there are some fundamental laws, which are based on perceptions and their interplay. Speaking very briefly, there are five classes of perceptions: emotions, wishes, thoughts, beliefs, and body sensations. The division of perceptions into these classes is not a result of a purely intellectual exercise or idle theorizing. If one starts carefully and diligently observing the contents of their mind, these contents will delaminate into such classes naturally. Try it yourself and you will see it as a fact.

Further introspection and assessment of arising perceptions would reveal some interesting patterns: there are two mutually incompatible kinds of emotions, and two mutually incompatible kinds of wishes, and so on.

One could make observations about the interplay of these perceptions and their dynamic. For example, if someone in some specific situation experiences an emotion X and a wish A (with some specific qualities), they can either realize that wish or choose not to do so. Each choice will lead to some changes in the contents of the mind: emotion X is replaced by emotion Y, and/or wish A is changed into wish B, and so on. Gather enough observations of that kind, and you could eventually formulate some hypotheses about possible general laws of perceptions (e.g. make a prediction that emotion X will change to emotion Y in specific set of circumstances).

These hypotheses could be verified by training several people to observe the same five classes of perceptions in the same manner. Arrange various test events for them, record their choices and outcomes of these choices (described in the same language of five classes of perceptions).

If most of them report more or less the same subjective outcomes (without being told about hypothesis and predictions, of course), that's the first step of verification for a possible general law of perceptions.

The second step of verification would be to apply brain imaging to those trained people, allowing us to map emotions X,Y,Z to some distinct patterns of the brain activity. After that do the same experiments with people who are untrained: arrange same test events for them while recording their brain activity. If changes in their brain patterns match for emotion X changed to emotion Y, that would be an objective confirmation of hypothesis formulated earlier.


I'm afraid that after reading this guy, people will just give up, thinking there is nothing that works. And this is not the case at all, depression and many other problems are curable. Mine got cured, in addition to anxiety, anger management problem and suicidality. You can get help or start by reading a workbook yourself.

He links to a meta-analysis* that says CBT does cure depression and does so consistently, for many decades, without any decline in effectiveness. Later, for some reason, he says no single mental illness was ever cured.

It seems the main point of the article is to say that nothing except "nudges" ever worked in psychology - this is nonsense that he himself contradicts as I mentioned above.

Skip this sensationalist guy, use https://scholar.google.com to do your own research

* https://research.vu.nl/ws/portalfiles/portal/26037670/2017_C...


> "Another way of thinking about it: of the 298 mental disorders in the Diagnostic and Statistical Manual, zero have been cured. That’s because we don’t really know what mental illness is..."

I'm of the firm belief that today's understanding of psychological maladies is comparable to mid-19th century theories of the causes of disease - when doctors had little idea of the causality of infectious disease or cancer or heart disease (indeed they had no way of distinguishing between transmittable infectious diseases and other types of illness).

Take the importance of insect control and water treatment and condoms in preventing infectious disease from bubonic plague to cholera to HIV and syphilis - they just had no idea until Koch and Pasteur came along. It's probably safe to compare this to our current advertising system, which deliberately makes people feel miserable about various aspects of their appearance and social status with the goal of convincing them that buying some product or other will fix their lives - and it's especially damaging when developing children and teenagers are the targets.

The fact that capitalist consumer-society norms are as much a source of mental illness in modern populations as the filthy open sewers of old European cities were of infectious disease is a concept I suspect today's corporatized academic institutions will have a hard time accepting.

A further issue is that currently illegal psychedelic drugs show more potential for understanding and treating a wide variety of mental illness conditions under controlled conditions than any of the widely prescribed antidepressants do, and yet most governments are rigidly opposed to their legalization.


Psychology is the study of the brain at its highest abstraction when we know very little about it at any level. If you believe in determinism then psychology is just voodoo bullshit. Everything should be able to eventually be explained by a pathology, physical processes. With each new scientific breakthrough psychology becomes more and more obsolete and irrelevant. How many counselors have already been replaced by a prescription to antidepressants?

I feel like that's saying weather forecasting is voodoo bullshit because it's all quantum mechanics deep down

What is a rain cloud?

Now what is consciousness?


I remember some Discovery show called psychology a pseudoscience and fraud.

Two factors are significant in the failure of psychology to deal with mental illness: cultural shift; political activism.

In the U.S., many used to believe in God, his design, righteousness, love, justice, strong families, building on biological principles like gender roles, and institutions reflecting these principles. Most believed we would be judged at the end of our life for how well we did those things. They believed we would be rewarded or punished. Building on these things kept paying off in spite of all the problems that came from the simple nature of humanity.

Over time, the culture shifted to be godless, subjective, individualist, money-focused, pleasure-focused, anti-family, going against biological design, and so on. Doing the opposite of God's design unraveled its advantages while leading to all sorts of problems. With subjectivism over objective truth, we also can't agree on solutions.

We also lost the supernatural advantage. If we repent and follow Jesus Christ, he puts the spirit of God in us. God's spirit gives us an inner peace that persists even in bad circumstances. We learned that our suffering is constructive, if not caused by bad choices. We also learned to turn away from sin, which prevents much suffering in individuals and in society. God also hears our prayers, whereby he may supernaturally change the circumstances of individual lives or the world itself. We call these events luck.

Outside of Christian counseling, psychologists and society have given these things up. They’ve given up the power that Jesus Christ provides to overcome things we can’t ordinarily overcome.

I also mentioned political activism. When liberal atheists took over education, they started promoting ideas that are politically important to them, which didn't have any science backing them up. If there's a conflict, their politics always take priority over science. They tied their unproven solutions into those general concepts, so you can't argue with many of their methods any more than with the concept itself. So, they pick people who will promote these views, they're baked into their "science," and nobody sees contrary evidence. If the theories do damage, or are ineffective, the damage continues because their application is driven by political domination rather than the scientific method.

Examples include evolution as the origin of life, specific theories of man-made global warming, feminism, pro-homosexuality, gender as a construct, subjectivism, intersectionality/C.R.T. (modern Marxism), and so on.

So, psychology is failing because it’s built on the wrong foundations, powerless compared to having Christ plus counseling, and set up to fail by political activists whose desires come before the truth or others’ needs.


Many people are pointing to the replication crisis as indicating that Psychology is not a science. However, Medicine is also facing a replication crisis to the same degree. Would people also suggest that Medicine is not a science?

I think it comes down to the fact that both disciplines work with people and behaviour, and people are not homogeneous nor are behaviours easily predicted. For that reason, I think that a distinction can be made between “hard sciences” and “soft sciences” - we just can’t get the same level of precision that the hard sciences can, but that doesn’t mean that Psychology isn’t scientific. We still apply the scientific method to discover phenomena and develop testable theories, just like other sciences. And meta-analyses allow for much greater certainty in findings.
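As a sketch of how that pooling works (all numbers hypothetical), a fixed-effect meta-analysis weights each study by the inverse of its variance, so the pooled standard error is smaller than any single study's:

```python
import math

def pool(effects, ses):
    """Fixed-effect inverse-variance pooling of per-study estimates."""
    weights = [1 / se ** 2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return est, pooled_se

# three hypothetical small studies, each noisy on its own
effects = [0.30, 0.10, 0.25]
ses = [0.15, 0.20, 0.12]
est, pooled_se = pool(effects, ses)
print(f"pooled effect {est:.2f} +/- {1.96 * pooled_se:.2f}")
```

A real analysis would typically switch to a random-effects model when heterogeneity across samples is expected, which widens the interval rather than shrinking it this aggressively.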


It's possible that fields can fluctuate in how scientific they are over time.

Physics has in the past been a case study of how to successfully deploy the scientific method, but modern physics is often criticized from within for spending all its time on mathematical "theories" that can't actually be tested, something conventionally considered unscientific. Epidemiology was once built on basic scientific observation, it's now also disappeared down the drain-hole of endless mathematics without real world hypothesis testing.

Medicine in contrast spent a lot of time being unscientific in the past, and is now much more rigorous. The problems here are usually fraud done in aid of avoiding the scientific method without being detected, rather than the actual lack of a scientific method at all.

And some fields have got more scientific over time, or at least more quantitative. Economics and education are like this.


It isn't very scientific if a discipline persists in bad research practices, and that's what psychology is still doing. Indeed, it's very hard to get experimental evidence for a theory by measuring people, but if you know you can't, then don't do it. Wait until the means to do so become available, and try to find those ways instead. Unfortunately, that's not the state of psych. research.

There is of course also a more practical side. For psych and med interventions, you don't need to know how it works. If method A works better than method B, it's a good idea to try A before trying B. However, that research is also difficult, and many papers have been found wrong afterwards. While that's the way to go, it shows that even when attempting to establish superficial effects, the disciplines are failing.


I think you are ignoring a massive driving factor in both medicine and psychology; they are businesses.

Both fields, scientific as they often try to be, are subject to the sway of funding and profits more so than other sciences like physics (as an example). Vague and often unsupported claims sell medical and mental health products, making it a profitable venture whether the product actually works or not.

I say this as a former student of psych, degree and all. The mental health industry has exploded in the last ten years or so, yet not much new thinking has been brought to the table at all. It's a lot of old standards being repackaged as a revolutionary solution (looking at you, CBT) but being sold as mobile apps, hooking people with hope for relief in the form of a convenient, easy-to-use package, the same way medicine does when they add a bit of caffeine to acetaminophen and call it a solution for migraines that you don't have to visit a doctor to obtain. Once a field is driven by profit growth, it becomes this twisted, ersatz version of its former self, and this works because people are so desperate, they'll buy into just about anything you offer that might make them feel slightly better.


I would absolutely say that medicine is not science - or at least that it is very bad science with a low probability of being reproducible - and anyone with even tangential exposure to most medical research would do the same. Small sample sizes, confounding variables in treatment, and non-random assignment are the norm, simply because practicing medicine is not the same as performing scientific research.

Medicine is the practice of healing sick people, and (even if they wanted to, which most do not) it is very, very hard for doctors to convince an IRB that it's a good idea to give sick people a placebo. Also, despite some attempts to perform placebo knee surgeries [0], there are many medical interventions where it is simply impossible to do things like a basic RCT.

[0] https://www.nejm.org/doi/full/10.1056/NEJMoa013259


Would people also suggest that Medicine is not a science

I think that depends on whether those people believe that evidence-based medicine is scientific.


When the evidence is sketchy or fabricated (or p-hacked), as in psychology, the label "evidence based" doesn't mean very much.

Just look at decades of research into nutrition, and note that we haven't moved very far beyond "vitamin c prevents scurvy" and "too much of anything is bad". Even identifying at what point the amount of things like "too much" of salt or cholesterol crosses the line of "too much" remains contentious.

Certain diagnostic fields have obviously grown leaps and bounds, as have certain categories of medicines. On the other hand, there are counter examples aplenty.


The fact that the term "evidence-based" even exists in medical research is an indication of a big problem to start with.

I understand the premise of the idea and that more scientists in the field are trying to make their research more rigorous. But this also indicates that the research done until recently was NOT "evidence-based", and hence not very credible or reproducible.


I believe evidence-based medicine is possible, maybe, with enough practice.

I doubt it's to the same degree, maybe if you include cases like this https://x.com/cremieuxrecueil/status/1844596459162812869

That shouldn’t surprise anyone on the medicine end. People across the industry, from reviewers to those writing textbooks, are receiving money from those they’re reviewing. They’re incentivized to lie to make their sponsors look good. It’s the opposite of science.

Now, we come to the root of the failure. God’s Word tells us to test the source and the content of a message. Certain sources are all about ego, money, or pleasure. They’re willing to manipulate others for selfish gain. Simultaneously, God’s Word teaches habits and traits that honest people should possess. So, we look at their behavior to spot the warning signs that their content should be dismissed or simply reviewed more.

Applying the ancient wisdom to organized science, I found that this problem was pervasive. The institutions, both funding and research, had biases that caused more of specific types of work to be created. Then, there were biases at the individual and group level. These aren’t always bad but are rarely considered.

Further, there were three incentives driving lots of bad science: the financial incentive called “publish or perish”; citation indices or scores; and sensational media. Two of those incentivized cranking out lots of low-quality papers that pressed the right buttons to get more money or citations. The third incentivizes specific claims that receive fame. All three usually penalize, in funding or fame, both steady grunt work and replication of existing results.

So, rather than the scientific method, what’s happening is more like a TV game show with winners and losers. It always looks like it uses the scientific method, with some amount of real science in it. There’s also some portion of useless work, necessary work not happening, fraud, and censorship. These aren’t random: they’re baked right into the incentives and biases of the system.

Realizing this has made me wonder what I can trust or what I even know out of prior, scientific reporting. Fortunately, we’re blessed that most facts are so unimportant that being wrong doesn’t hurt us. The fields we depend on day to day are close enough to the truth that the products work acceptably well or don’t hurt us. Past that, I wonder what it will take to get to a point where organized science is actually doing science. Consistently.

I hope we achieve this. I worship Christ, not science. It is one of many forms of knowledge, many gifts, that make our lives better if used correctly. I thoroughly enjoy real science. I just think it’s in huge decline with society getting more and more dependent on the fake forms of it.


>Would people also suggest that Medicine is not a science?

Yes, I would. If one looks closely, it's got the trappings of religion. To give you a guesstimate, I'd say 3/4 of medicine is pseudoscience: either completely useless or actively harmful.


Because you don't even know what Freud actually did.

He assumed the human is a machine and used _analytical_ thinking trying to understand it.

Yet you think the interpretation of dreams is just BS. Either you only read secondary literature or you have a deficiency in reasoning.


Oof. What was the point of that article? It felt like an unnecessarily tough read.

I found it insightful and pleasant to see that there are people inside an important field being intellectually curious and honest about the direction of their field of study.

That’s the promise that initially brought me to the article, and I like its intent. I’m just not sure I got more info from the article’s content than from its title. But that is maybe just me, another fellow psychologist.

I agree with the author's identification of the problems in the field, but I'm not sure about their conclusions, or 'ways forward', which are

1. Debunk 'folk' psychology, with comparisons to the illegitimacy of 'folk' biology and 'folk' physics

2. Shake things up- meaning don't be afraid to question established dogma regardless of reputational risks

I, a completely unqualified internet commenter, will give number 2 a try. I'd argue that psychology is a folk science, which is to say it's not a science at all, but an art. As we've recently discovered, a massive swath of psychological studies are non-reproducible. So maybe we shouldn't treat psychology as if it were a rigorous scientific pursuit, but a philosophical one, or even a therapeutic one (i.e., make it synonymous with psychiatry). Leave the science to the neuroscientists, who can quantify and measure the things they're studying (I understand there is some overlap between these fields sometimes). If your study consists of asking people questions and treating their answers as quantitative measurements of anything, I don't know: it feels like something has been lost in the sauce there. Too many variables to draw any meaningful conclusions.


Philosophy is often very well grounded, often annoyingly so. We gave up on it because it turns out you can create iPhones if you ignore the philosophical problem of induction, go do science instead, and assume the laws of physics won’t try and conspire against you.

Rather, I wonder if Psychology would be better thought of as something even less grounded than science. Something where we’re just happy with an accumulation of stuff that’s happened to work well enough, without pretending that we’re hunting fundamental principles. Something like a profession: Engineering, Doctoring, that sort of stuff.


Why would the problem of induction prevent one from inventing things?

Also, where did you learn it was given up on, and that is the reason why?


Neuroscience isn't immune from non-reproducibility, for example this https://news.ycombinator.com/item?id=41672599 from just a couple of weeks ago

What a stereotypical hackernews comment, wow.

> I, a completely unqualified internet commenter,

Just leave it there.


How would people learn if they don’t get feedback on their ideas?

Why go out of your way to gatekeep random internet comments?


Sometimes you should make the effort to learn before sharing your idea with other people.

So many people will blindly walk forward in the dark, completely in ignorance, and then get upset that people ask them to light a candle first.


Care to enlighten me? It's not like I stated any of that as fact; it was qualified with an admission of my own ignorance, and I'm open to being corrected. Or are you just demonstrating what happens when established dogma is questioned in this field? If so, point taken; I can see why they're having problems.

Which established dogma do you want to challenge and why?

More so 'question' than 'challenge', but it seems like the idea that psychology is a hard science at all is sort of a baseline assumption, or dogma. This article goes into great detail on all sorts of issues in the field, but stops short of questioning whether the whole thing can even be classified as scientific. I'd argue that the reproducibility crisis throws that into question to some degree (though that crisis apparently extends into 'harder' sciences as well, so maybe not?). And intuitively, human psychology just doesn't seem like something you can quantify, at least not to the level of granularity required by the scientific method. That is, unless you're measuring the activity of neurons, synapses, hormone levels, or other physically measurable phenomena to draw your conclusions, and I'm not sure how much of that is done in psychology as opposed to neuroscience.

Psychology has always been considered a soft science.

I think there's a common pattern on Hacker News that goes something like:

A: Overly broad generalization of a huge body of work put together over 100 years by tens of thousands of professionals

B: Ugh, hate this take from armchair experts

A: Okay, then give me all the examples! Otherwise you're proving me right!

I happen to think your overly broad generalization is more right than wrong, but I also recognize the silliness of asking to be "enlightened" on an entire branch of human endeavor via internet comments. This is a problematic argument form, and someone calling out this behavior does not prove you right.

So let's be clear about what "enlightening you" means. If your argument is "psychology is based on a fundamentally flawed/useless study design (surveys) and we can never learn anything real from it", then a few examples of reproducible, interesting, not a-priori obvious results from surveys should be sufficient to show that we actually can learn real things from surveys. (And be careful not to fall into the "I could have told you that!" fallacy.) Luckily, this question was already asked on Reddit, and I think there are some strong examples:

https://www.reddit.com/r/AcademicPsychology/comments/qktt6h/...

On the other hand, the field is absolutely rife with problematic study design and even some entire psychology departments (e.g. Stanford) seem to be completely rotten. The most salient example of this is the "implicit bias" studies that came out of Stanford. Their study design was something like:

Task 1: Associate good words with white/Christian/American themes as fast as you can

Task 2: Associate bad words with "foreign" themes

Task 3: Associate good words with white/Christian/American themes again

Task 4: Associate good words with "foreign" themes

And the result is: you're racist because Task 4 takes you a few milliseconds longer. It never occurred to them (or it did and they intentionally forced the result) that in Task 4, you're literally unlearning what you've just practiced 3 times. It was one of the most blatantly bad studies I've ever seen in my life and I didn't see anyone else calling out how problematic it was, because Stanford.
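The claimed confound can be illustrated with a toy simulation (made-up magnitudes, not real IAT data or the actual protocol): if responses speed up with practice and switching key mappings carries a one-off relearning cost, then a fixed block order manufactures a "bias score" even for a subject with zero bias, while randomizing the block order averages the artifact away.

```python
import random

def simulate_iat(randomize_order, n_subjects=2000, seed=0):
    """Toy model with NO bias term: any nonzero score is an order artifact.

    Two critical blocks per subject. Completing a block makes later
    responses `practice` ms faster; whichever block uses the second key
    mapping pays a one-off `switch_cost` ms relearning penalty.
    Returns the mean 'bias score' (incongruent RT - congruent RT).
    """
    rng = random.Random(seed)
    practice, switch_cost = 30, 50  # assumed magnitudes in ms
    total = 0.0
    for _ in range(n_subjects):
        base = rng.gauss(600, 50)  # baseline RT; cancels in the difference
        congruent_first = rng.random() < 0.5 if randomize_order else True
        if congruent_first:
            rt_congruent = base
            rt_incongruent = base - practice + switch_cost
        else:
            rt_incongruent = base
            rt_congruent = base - practice + switch_cost
        total += rt_incongruent - rt_congruent
    return total / n_subjects

print(f"fixed order:      {simulate_iat(False):+.1f} ms")  # +20.0: pure artifact
print(f"randomized order: {simulate_iat(True):+.1f} ms")   # near zero
```

As another commenter notes, most real implicit-bias studies do randomize trial or block order, which is precisely the control this confound calls for.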

So in general I actually agree with your take: the field is rife with junk science, some of it obvious, and almost certainly some of it intentional. But please also recognize that "I'm an expert in tech and therefore everything, and if you can't prove me wrong in an internet comment then that proves me right!" is a very problematic argument style. It sounds like you're trying to prove yourself right, and a much more efficient way to get smarter is to habitually try to prove yourself wrong.


I appreciate the productive answer. You're right; re-reading it now, my tone was more argumentative than inquisitive. It'd be foolish to dismiss such a large body of work as 'useless' and I hope it didn't come off that way. Of course understanding human psychology is immensely useful for all sorts of reasons.

To be fair, most studies of implicit bias are randomly ordered on a trial to trial basis.

I agree with the overall thrust of your argument, but:

> be careful not to fall into the "I could have told you that!" fallacy.

That's not considered to be one of the standard logical fallacies as far as I know. Why would it be fallacious? Social studies are rife with findings that are either extremely obvious to everyone, extremely obvious to conservatives specifically (because psychologists are nearly always on the left), or extremely obvious to anyone who reads the study design.

I recently wrote an essay about why replication studies can't fix science [1] and one of the problems cited is the prevalence of studies that aren't worth replicating because "I could have told you that". Examples include silly papers like [2], which is literally titled "People's clothing behaviour changes according to external weather and indoor environment" yet somehow manages to also say, "It is evident that further studies are needed in this field", or [3] saying that the average male student would like to be more muscular.

But there are less silly examples which crop up due to the ideological bias in the field. Academics purge any conservatives they find, meaning that social studies spends a lot of time and money investigating things that are considered obvious outside of far-left spaces. Jonathan Haidt is famous for arguing that this is a problem (albeit not actually doing anything about it). As an example highly apropos to this thread, psychologists recently started discovering that stereotypes are usually accurate. Much other work in psychology is built on the suspiciously circular premise that stereotypes are either fictional and thus mere folk intuitions, as Mastroianni would put it, or are accurate only because people believe they are accurate (the field of "stereotype threat" is like this). On the left, the idea that stereotypical achievement gaps are socially constructed is considered obvious and a matter of faith; to people on the right, the opposite is true: the idea that they reflect actual truths about reality is the obvious one.

So even if you set aside the offensively wasteful, there's still a lot of scope for study claims to be considered obvious by some and not by others.

[1] https://blog.plan99.net/replication-studies-cant-fix-science...

[2] https://www.sciencedirect.com/science/article/abs/pii/S03601...

[3] https://openurl.ebsco.com/EPDB%3Agcd%3A4%3A12322516/detailv2...


> That's not considered to be one of the standard logical fallacies as far as I know

I don't really care whether it's on The Official List of Logical Fallacies™ or not, and in fact caring too much about this list is itself a bit of a fallacy. Nor do I necessarily consider "I could have told you that!" necessarily a logical fallacy; more like an emotional fallacy. (But humans use emotions when trying to understand things!) I consider a fallacy to be something which is an "attractive but wrong" step in an argument or rationale. There are two reasons why I consider it a "fallacy" by that definition:

1. Humans dramatically over-estimate the obviousness of an idea after they've already heard it. Once you know the answer, you basically become immediately unable to estimate how obvious that answer was beforehand. This is highly evident in mathematical proofs, e.g. when "obviously right" things turn out to be wrong, often with an "obvious counterexample". Both sure feel completely obvious, depending on what you know!

2. Even obvious things are worth testing. Plenty of things that seemed obvious have turned out to be wrong. This is not evidence of wasted funding. Obvious things can also be related to less obvious things. Your 3rd example shows this: there's a 2nd related hypothesis they're testing: "men would believe that women would find a more muscular shape more attractive than women actually report". So men tend to want to be more muscular, and also men tend to think women find high muscularity more attractive than they actually do (or they think women's "ideal muscularity" is higher than it actually is). This may be an example of "overturning our intuitions" whose understanding could improve outcomes -- if it replicates, anyway. Hardly an example of a pointless study.

That being said, humans actually do seem pretty good at being able to "tell you that" beforehand. There are some fun quizzes you can take [1, 2] to see if a study replicates beforehand, and you'll probably do pretty well on them. But that doesn't necessarily mean that we shouldn't do the tests anyway! We can still be surprised.

> Academics purge any conservatives they find

This is extremely dependent on the department and institution, and in general is way overblown. Yes, this can be a problem in some departments in some of the social sciences. On the other hand, in economics departments, it can be liberal ideas that are verboten. There are fads and politics everywhere, but mostly science is doing OK at considering ideas on their actual merits -- eventually. Meanwhile a lot of people lob angry criticisms at academia for rejecting their bad ideas because they're bad and wrong, assuming ideological bias instead.

> psychologists recently started discovering that stereotypes are usually accurate

I suspect this is way too strongly worded for what's actually being found (and replicated). Citations would be very much appreciated. Be wary that slight changes in the mean of a population can be detected in tests with p < whatever epsilon you like, and can also have outsized effects on the tail ends (e.g. professional sports), but give you almost no predictive power for individuals you meet on the street. If "stereotypes are usually accurate" means we can say with near-certainty that "population X is slightly more likely to do Y", that does not mean "most X do Y" nor even "a random member of X is significantly more likely to do Y than a random member of not-X". One of the reasons these kinds of studies are considered problematic is how easily they can be misconstrued to justify racism.

[1] https://mru.org/teacher-resources/active-learning/will-it-re...

[2] https://80000hours.org/psychology-replication-quiz/
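The earlier point about small mean shifts can be made concrete with a toy calculation (an assumed effect size, not data from any cited study): for two equal-variance normal populations separated by a small standardized difference d, knowing the trait barely helps you classify an individual, yet the higher-mean group is heavily overrepresented in the far tail.

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d = 0.2  # assumed small standardized mean difference between two groups

# Best achievable accuracy when guessing group membership from the trait
# alone (equal-variance normals, equal group sizes): Phi(d/2).
accuracy = phi(d / 2)
print(f"individual classification accuracy: {accuracy:.1%}")  # 54.0%

# But 3 standard deviations above the pooled mean, the higher-mean group
# is roughly twice as common: small shifts are amplified in the tails.
cutoff = 3.0
tail_ratio = (1 - phi(cutoff - d / 2)) / (1 - phi(cutoff + d / 2))
print(f"overrepresentation beyond {cutoff} SD: {tail_ratio:.2f}x")
```

So the same d = 0.2 shift that is nearly useless for predicting any individual (54% vs. a 50% coin flip) roughly doubles a group's representation at the extreme, which is exactly how "significant group difference" headlines get misread.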


Sure, here you go. This is the first I found with a few seconds of searching so I don't claim it's the best citation but it gives an overview of the research on stereotype accuracy:

https://spsp.org/news-center/character-context-blog/stereoty...

Note the reference to over 50 studies, and stereotype accuracy being amongst the most replicable of findings in social psychology. Not very surprising given they're literally asking "are things most people believe to be true actually true" - this is a question that's going to obviously yield a lot of big effect sizes and high levels of replicability, but it's also trivial by definition.

> in economics departments, it can be liberal ideas that are verboten

That seems doubtful, depending on how you define "liberal". Academic economics still has a notable left-leaning bent. If we check out the last two issues of QJE, one of the best-known economics journals, we see a large number of papers on typical liberal fascinations that have little to do with conventional economics: things like gender pay gaps, domestic violence against women, how to socially engineer people into taking vaccines, etc.:

https://academic.oup.com/qje/issue/139/3

https://academic.oup.com/qje/issue/139/4

Also, the basic premise of academic economics is that you can treat the economy as something knowable and controllable from the outside, whereas a more libertarian take would be that the economy is the process of everyone collectively figuring out what's both true and desirable, thus you cannot "step outside" the system in any meaningful way by definition. So the very nature of studying economics in a university is to some extent a left wing starting point.


In a similar way, I'm wondering what gender studies would be called.


