The Lifespan of a Lie – Why can’t we escape the Stanford Prison Experiment? (medium.com)
422 points by fermigier 40 days ago | 218 comments



Here's the true experiment:

In surveys conducted in 2014 and 2015, Richard Griggs and Jared Bartels each found that nearly every introductory psychology textbook on the market included Zimbardo’s narrative of the experiment, most uncritically. Curious about why the field’s appointed gatekeepers, presumably well-informed about the experiment’s dubious history, would choose to include it nonetheless, I reached out. Three told me they had originally omitted the Stanford prison experiment from their first editions because of concerns about its scientific legitimacy. But even psychology professors are not immune to the forces of social influence: two added it back in under pressure from reviewers and teachers, a third because it was so much in the news after Abu Ghraib.


In other words, the field just has very low standards for truth, in a scientific sense.

This seems to go up and down the field... From academic publishing (replicability crisis) to undergraduate teaching. Psychology just doesn't have a scientific perspective on truth and knowledge.

Instead of a need for knowledge (Abu Ghraib and its timely relevance) triggering research and the accumulation of knowledge, it triggers the field to accept bogus knowledge because they need something.


> In other words, the field just has very low standards for truth, in a scientific sense.

The field has little to no scientific sense. It's why psychiatry/psychology is a soft science. It isn't based on the scientific method or reproducibility and testing. It's mostly based on consensus driven by a handful of powerful practitioners.

https://en.wikipedia.org/wiki/Social_science

What Sigmund Freud did wasn't science. His Oedipus complex theory isn't a scientific theory. There is no scientific test to refute it. You just accept it or not.

The biggest problem of our generation is the conflation of science ( hard science ) with pseudoscience ( soft science ). Because both have the word science in them, people give the soft sciences far more credit than they deserve.

If anyone is interested, Richard Feynman had a very interesting interview about it.

https://youtu.be/tWr39Q9vBgo


Psychiatry and psychology rejected Freud as an unscientific laughingstock a long, long time ago. This shouldn't be trotted out as an example of how these fields operate today.

They do attempt the scientific method, they just aren't doing a very good job of it much of the time apparently. If you think Freud, psychiatry and psychology are so synonymous that they can be referred to interchangeably, try picking up a current psychology textbook.


> "Psychiatry and psychology rejected Freud as an unscientific laughingstock a long, long time ago."

In my experience few laugh at him. Most respect him, even if they don't believe him.


Laughingstock was not a perfect choice of words. People will still loosely refer to his more well-known ideas casually. Some could be said to respect him to some degree, others not. But hardly anyone is confused about exactly how scientific his output was, and neither is any textbook published in the last 40 years.


psychiatry != psychology. Do not overgeneralize. Psychiatry has many problems, namely overprescribing pharmaceuticals, but it is not the same thing as "social science"; it is an actual branch of medicine, and a psychiatrist has to complete medical school, a psychiatric specialization, and residency.


Erm.... Idk. What was Freud then? These are certainly separate fields in terms of accreditation, but in terms of academia, they're really subdisciplines.


At least in the US...

A psychiatrist is a physician. They go to the same schools as future radiologists, surgeons, and general practitioners. They learn gross anatomy by dissecting cadavers. They work 80 hour weeks learning to perform medical procedures on actual patients. After that, as doctors, they receive specialized clinical training in years-long residency programs. They are licensed by state medical boards in addition to their own professional orgs.

A psychiatrist is authorized to hand you a prescription for antibiotics, for amphetamines, for opioids. Christ, a psychiatrist is authorized to detain you on their credibility alone for upwards of 48 hours, arguably exceeding the legal detention power of the police.

Psychologists are certainly competent, well-educated experts in their discipline, but they don’t and can’t do all that. Of course the two deal with loosely related subjects, but there is very little comparison between them in any practical sense.


Sigmund Freud was a 19th century doctor. The modern medical field is not comparable to the one 150 years ago when the treatment of most things was "go breathe near the ocean for a bit".


Also, no serious 21st century psychologist or psychiatrist would consider Freud's theories to have any validity other than for interpreting Shakespeare plays. Freud's psychodynamic psychology has long been eclipsed by behavioral, cognitive, and evolutionary subfields that have much stronger experimental standards than Freud had (most of his hypotheses came from case studies of a handful of his patients).


Um, I have a family friend who is a practicing psychoanalyst. Psychoanalysts are still eligible for licensure in all states, and psychoanalysis remains a popular form of psychotherapy. So don't be so sure the field has moved past Freud. They still think his theories are valid enough to use on people.


According to Wikipedia, it has evolved since Freud, so it may not be valid to say it hasn't moved past Freud. On the other hand, I'm sure homeopathy has changed in that time as well.


tbh, going to take a nice couple of breaths next to the ocean sounds like it would do me a lot of good at the moment.


For relaxation absolutely - for polio, not as great.


Former psych major here. At my undergrad, the psychiatry and psychotherapy tracks were actually in separate departments (neuroscience and psychology respectively). There was actually a world of difference between the two, and it's one of my post-graduation regrets that I didn't choose the more rigorous / intellectually satisfying department for my major.


The 'scientific method' is mostly a pop-culture myth, mostly used to discredit whatever views one does not like. There is value in predictability, but one must understand its limits. It's one thing to work with electrons and planetary systems. It's easy to be cocky about how successful physics is. But that is far from covering the whole of reality. Our minds, whether you believe that means your brain or soul or both, are much more complex than the hydrogen atom.

It's one thing to appreciate what Richard Feynman contributed to physics, and another to understand his behaviour in striptease clubs. It's one thing to explain black holes and stuff, and another to analyze the correlation between IQ and success, Big Five personality traits, disgust, political views and biases, and what not.


That our brains are more complex than a hydrogen atom is a fact, but that does not invalidate the scientific method. It simply means that we need much more time, more research, more data, and more complex theories to understand the brain than we need for an atom.

Psychology today reminds me of the state of medicine before bacteria were discovered: we do some semi-magical rituals that sometimes work, but they do not produce predictable results, because we have yet to discover the real underlying mechanisms of the human mind.


I think if you dig into many modern medical treatments, you'll find that the true reason they work is unknown (and in many cases, there is little evidence that they work at all).

Even in the hard science of physics, we can understand the cause and effect, but the fundamentals of nuclear forces, gravity, and magnetism are all a complete mystery.


But there's certainly hope that someday we'll be able to explain through physics how come we hope to explain hope and understanding through physics.


Criticism of Marx's and Freud's "social science" is basically the modern standard definition of science. I agree with it, as a criticism.

But... it is simply not true that psychology is practiced in academia as a non-scientific discipline. They experiment. They publish data. Etc. It is just substandard data.

Freud's Oedipus complex theory may just be nonscience, but this Stanford prison "theory" is scientific; it's just scientific fraud.


Imho, the conflation of pseudoscience with fields of research that simply can't be modeled as precisely as physics or chemistry isn't super helpful. I feel that the label of pseudoscience should really be reserved for guerrilla implementations of mythological practices that rely solely on individual and anecdotal, if not purely placebo-influenced, experience. Things that apparently offer treatment for critical illnesses, but offer no framework for testing the results or basis for it working. Acupuncture, homeopathy, chiropractic, anything to do with life forces, naturopathy, etc..


I found when I was studying Psychology, that it's very much focused on pragmatics rather than intellectual rigor. i.e. if an explanation helps provide an effective therapeutic model then it's considered "good enough".

I'm not saying it's right. In fact in terms of evaluating the field "as a science" it's wrong. I guess that's why it's a BA not a BSc ... probably also why various other more rigorous subfields such as neuroscience have split off to become their own disciplines.

I think most people who have studied psychology will get this, and critically evaluating all the bullshit around you should be one of the fundamental skills that you learn. Unfortunately though, things are taken far more literally once they get "out into the open".


Agreed. Approaching psychology with a permanent kind of rigour simply doesn't work. In order to change the mind that is not functioning in the way one desires, one has to believe that mind can change. Presenting information as though it's impermeable, absolute, matter of fact truth since the event(?) that allowed humans to first begin to think is absolutely absurd. If humans can go through all the different variations of thought they have gone through thus far, why do we work with the assumption that there's any absolute truth to the nature of thought to begin with?

Therapeutic models have to be adaptive to the individual one is guiding towards mental wellness (or whatever the goal is). Someone undergoing the process of regaining trust in their own mind has to pass some of their own personally designed tests. Otherwise they are just consuming information that already exists, and that's back to the original problem - one can only trust in information that comes from others, and not from the individual self (unless it matches perfectly with others). That's paradoxical, and if all oneself can do is reproduce what already exists, one can not prove to oneself that one is different or has changed or has improved, from 'before therapy'. So psychology is going to keep changing. It has to.

That's what students of psychology are interested in, right? Why force the model they learned that at one point, made absolute sense, to the student, at a prior time, onto others? What is the purpose of that?

I think that stuff is obvious when it's out in the open, but every student, researcher, and doctor of psychology has their own fears and 'issues' to deal with. What if I get it wrong? What if I screw it up really badly for someone else? Didn't I go into this to help people? And I think those kinds of fears, if they are not dealt with in a responsible, respectful manner - can really control the dynamic far more than all the knowledge one has access to, and all the thought and reason one can muster.


You've never seen egregious errors in physics or cs textbooks? Textbooks are often bad. That's not unique to psychology.


In my experience, CS textbooks often fail to introduce concepts in logical order. Whenever I read a paragraph in one of them, there always seem to be one or two terms that get thrown out there but that aren't introduced until four chapters later. My impression is that CS texts overall presuppose that you are already knowledgeable in CS.


This is a fascinating article about how much of the famous Stanford Prison Experiment was a sham.

> The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

It seems that we fell for the narrative fallacy every time this "research" was used as an explanation for behavior in the real world.


Interesting comments about a similarly famous study with a similar gist, the Milgram experiment:

"Many other studies, such as Solomon Asch’s famous experiment[1] demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial[2] — Milgram experiment demonstrates how prone we are to obedience in certain settings. "

[1] https://www.simplypsychology.org/asch-conformity.html [2] https://psmag.com/social-justice/electric-schlock-65377


I am a bit suspicious of the conclusions drawn from the Asch experiment. The subjects are put in a situation in which they are led to believe that one of two things is true: (1) I am unable to tell which line is longer, or (2) every other person in this room is unable to tell which line is longer. They are reported to have acted upset, which leads one to think they did not want it to be (1), but the only other option (they knew of) was (2), which is, in fact, even less plausible.

So, really, in that (very artificial) situation, concluding that they have lost the ability to tell which line is longer, while unlikely, was the least unlikely option. So, they chose to believe (1) over (2).

Their only mistake was in not thinking of the possibility that every other person, including the researcher, was trying to deceive them. That's not at all the conclusion that Asch (and others since) have drawn from it. I believe that study could replicate, but I don't think it means what they say it means.


The issue I have with the Asch experiment is that it also conflates conforming and belonging. External group pressure vs a want to be included.

It seems to me to be testing the second while being reported as the first.


Vsauce's Mind Field series replicates this experiment, in the traditional sensational way he does everything. It's still pretty interesting to actually watch the people and their facial reactions. Still, draw your own conclusions.


I've watched the whole video of the Milgram experiment. People claim those who were the teachers knew it was fake, but interviews years later have people claiming they really thought they killed the person in the other room.

Not to mention his experiment was replicated by several universities around the world[1], although due to modern ethical standards, it would be impossible to reproduce ethically at most universities today[2].

I don't think Milgram falls into the same category, especially with all the work they put in at Harvard in the controls (the learner was a tape recording; so everyone heard the exact same thing. It was the same professor in the room with him; always replicating the same dialog).

This is in direct contrast with Vsauce's trolley problem experiment, which had way too small a sample size, and who forced a response for one person by changing the conditions of the experiment (there was no one outside when he tried to get help).

[1]: https://en.wikipedia.org/wiki/Milgram_experiment#Replication...

[2]: https://www.psychologicalscience.org/observer/replicating-mi...


> "People claim those who were the teachers knew it was fake, but interviews years later have people claiming they really thought they killed the person in the other room."

Some of the teachers undoubtedly thought it was fake, others did not. And importantly a majority of the teachers were not properly debriefed until several months later (some never were). The "dehoaxing" most received immediately following the experiment was itself another hoax which in some cases was meant to persuade the teacher they had in fact been shocking somebody (but that the shocks were only very slight harmless shocks.) The dehoaxing was meant to introduce the teachers to the learner, to demonstrate that he was alive. Some did not even receive this dehoaxing, presumably because procedure was violated, which explains why a handful of them might believe they'd killed somebody.


> And importantly a majority of the teachers were not properly debriefed until several months later (some never were)

No. No. No. That is entirely wrong. Watch the original video. Every one of the teachers is introduced to the learner after the experiment, and they were told no actual harm came to him. At least in the original Milgram iteration, all of the teachers knew by the end.


And when self-reporting about participation in a famous experiment, folks are known to exaggerate.


There are some folks who are pretty dead set on maligning Milgram alongside Zimbardo. Namely, Gina Perry wrote a book criticizing Milgram and his experiment's validity [1]. I haven't read it, but looking at some of the reviews, it seems most likely that it's a mixture of truth and the typical schlocky scientific journalism I've come to know and revile. [2]

[1] https://www.amazon.com/Behind-Shock-Machine-psychology-exper...

[2] Funny anecdote- when I was still in science (brain imaging) a reporter from a major news outlet interviewed another member of my lab about what brain activity would look like in a dead person's brain. The question was entirely serious.


[2] Brain activity appears to continue after people are dead, according to new study. (9 March 2017)

https://www.independent.co.uk/news/science/what-happens-die-...


The news article you cite is a bit overdramatic; the distinction between clinical death and brain death has been around for a while. None of what is in there is really news, although it might be somewhat remarkable if they found activity 10 minutes out; then again, this is research from the medical community, which as others have pointed out is often quite methodologically flawed.

You too are being slightly too generous towards scientific journalism: a typical non-scientific audience wouldn't make the distinction between clinical death and brain death, and the reporter likely wasn't making it either.


Might [2] have been related to this?

https://www.wired.com/2009/09/fmrisalmon/


You might be giving the reporter too much credit. It's very unlikely that she was familiar with that study before asking her question.


Minor point:

>...with all the work they put in at Harvard

I think the experiments were done at Yale.

Milgram wrote a book about the experiments, and I agree that (at least as described in his book) they worked hard to test the hypothesis: setting up experiments outside the university, using both male and female experimenters, having the experimenter play the student, etc.

https://www.amazon.com/Obedience-Authority-Experimental-Pere...


If we're interested in improving the world -- improving things like human dignity, welfare; and minimizing human suffering -- I think this is one of the most important concepts to understand about human nature.

It's the same concept I try to illustrate by noting the NASA Challenger disaster was not a case of an unknown, but one where groupthink murdered 7 astronauts.[1] The desire people have to conform to authority is extreme.

I can even think of a remarkable personal example from growing up, when a boy in class was given the punishment of having to sit next to an often-bullied girl. The teacher was not above the bullying, he had conformed to it. That must have had a detrimental effect on a young girl's self esteem.

So if we're still murdering, torturing, and dehumanizing people in order to conform to concepts of authority and hatred, how much have we grown from our past?

Isn't this the same concept that we're supposed to disagree with when it comes to the Nazis, North Korea, etc.? That we are a free and just society, and we fight to defeat those who aren't?

1-https://mobile.nytimes.com/2003/03/09/weekinreview/the-natio...


I think characterizing the Challenger launch decision as murder is unfair. At worst it was manslaughter.


I believe groupthink killed those astronauts. However, I'm not really focusing this argument around the idea of bringing leaders at NASA up on murder charges.


It's interesting to contrast this with yanny / laurel / dress color


That's an interesting perspective. I myself "want" the drama to be true not because I want to be unaccountable for my actions, but rather:

1) I want validation that the pressures I feel are normal. Not so that I can give in to them, though I'm sure "excusing" past mistakes would feel good, but so that I feel the difficulty of doing the "right" thing is legit.

2) mostly, however, I want to excuse OTHER people. I want assurances that, however buggy, people aren't individually evil, that something pressures them in such a way that when my judgement of people (as a group) is inaccurate there is a "rational" reason for it.

Otherwise I can't explain things like neonazis, gamergaters, certain "news" outlets and their zealots, etc. Without this one supposed flaw of human nature, the existence of these groups becomes far more terrifying.


I largely agree with your perspective - with a couple of differences. Most importantly, while I see what you're getting at with #2, I don't think disproving the SPE makes it so you can't explain things like neonazis, etc.

The line that resonated with me was:

> According to Alex Haslam and Stephen Reicher, psychologists who co-directed an attempted replication of the Stanford prison experiment in Great Britain in 2001, a critical factor in making people commit atrocities is a leader assuring them that they are acting in the service of a higher moral cause with which they identify — for instance, scientific progress or prison reform. We have been taught that guards abused prisoners in the Stanford prison experiment because of the power of their roles, but Haslam and Reicher argue that their behavior arose instead from their identification with the experimenters, which Jaffe and Zimbardo encouraged at every turn.

So I don't think your second statement is disallowed by this article's findings. Most people randomly chosen off the street aren't necessarily evil - but some can be. And those who are can have a profound influence on those who aren't. Especially when there's a concept of a "greater good" on the line. (Insert Godwin's Law subject reference here).

For me, that's the big lesson - be suspicious of the "greater good."

[edited for formatting]


I don't think people are drawn to it because it's giving an excuse for bad behavior. The Stanford experiment resonates with a lot of people because it mirrors civilization as a whole (hierarchical class based system)

The strong do what they can, and the weak suffer what they must.


Ten years earlier a book was published which was a work of sociology based on participant observation, an approach many here would regard as unscientific. It was very much an explanation of behaviour in the real world.

Erving Goffman's Asylums [0], in its four essays, identifies the creation of social roles and related rituals as the raw material of the social institution. In this case the institution was a real mental hospital.

My take on the kind of psychological experiment under discussion is that by studying an artificial social situation they have already failed. The petri dish of a very small mock prison on a campus just doesn't tell us anything very useful. Yet we can see eye-catching results being amplified and then becoming part myth and part morality tale. However, the belief that human agency is contingent and flawed, that we often don't know how we know the things we know[1], or why we do the things we do, remains.

[0] https://en.wikipedia.org/wiki/Asylums_(book) [1] https://www.amazon.co.uk/Our-Knowledge-Growth-Wittgenstein-R...


> we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do.

Opposite. It shows that we have to be held accountable: without a credible threat of such accountability, the reprehensible things will be done.

>it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance.

Not even close. It means that person A, putting person B in a position of power without proper checks on that power and without an effective system of accountability for abuse of that power, is guilty too of whatever power abuses person B commits.


Setting aside that part of the article's point is that it's unclear what (if anything) the experiment shows at all, that's not really a rebuttal. You're agreeing that systems create behavior, and adding on that therefore those in the system need to be restrained with the threat of accountability.

But you see how even if you believe the experiment argues that systems need to be held accountable, that still lets individuals off the hook? "It's not my fault, I was only following orders. Someone should have been holding me accountable." In fact, the article has a direct account of Zimbardo's experiment being used to defend a criminal. It turns out later that the defense was largely a lie.


>You're agreeing that systems create behavior

I don't think systems create behavior. They do, though, unleash and amplify. They can also force. I don't think any doctor, with their deeply ingrained "do no harm", would administer a shock in the Milgram experiment, for example.

> "It's not my fault, I was only following orders.

I think following an order backed by a credible punishing force is separate subject.

>Someone should have been holding me accountable."

That comes as an attempt at defense when you're already being held accountable, and it is basically equivalent to an admission that you made an error calculating your chances of being held accountable. It is puzzling why some reasonable people would still fall for it and accept such an admission as a defense, whether in legal or in moral/ethical space.


Doctors administer lethal injections in many states that institute the death penalty. Nobody forces them to accept such a role.

Someone I know used to specialize in physically rehabilitating torture victims...for the sake of getting them put back together just enough so they could withstand further torture.

Systems create behavior. It's a large part of how cycles of abuse persist.


You're not disagreeing with either the commenter you're replying to or the article itself.


>Just as the Gospel promised to absolve us of our sins if we would only believe

Ironic example... and wrong.

BTW, it's not about "believing", but about repenting, changing the old ways, and continually living a good life for the rest of one's days. And having faith... too.


You're probably right, but please let's not get into an off-topic religious debate. Those turn into flamewars.


It's not about that, but about how persistent some ideas can be, because honestly, who fact-checks everything?


I believe that you didn't mean it to be that, but experience has shown that it turns into that with high probability if the discussion is allowed to proceed. An internet forum, or at least this internet forum, isn't the medium for it.


I think I'm at the point now where I will consider the whole of psychology a castle of dishonest and incompetent studies until proven otherwise. The benefit of the doubt usually given to scientists, the assumption that it's not alchemy, had to go for psychology. It's too important and the abuse is too widespread. It really seems pervasive in the discipline at this point.


There's also the reproduction crisis[1], if you weren't aware of it already.

When less than half of your field's top papers can be reproduced in full, something is very wrong with your field. It's to the point where one can safely assume that any psych paper (but especially social psych) is garbage until such time as it's been replicated a time or two.

[1]: https://www.nature.com/news/over-half-of-psychology-studies-...


> It's to the point where one can safely assume that any psych paper (but especially social psych) is garbage until such time as it's been replicated a time or two.

Good, because it is probable that we should assume this for any science[1].

[1]: http://journals.plos.org/plosmedicine/article?id=10.1371/jou...


Everyone assumes that science is filled with neat, well-manicured lab notebooks, pristine methodology and fastidious characters.

In reality, the current scientific culture is one of rampant distractions, where pretty much everyone is running around like a beheaded chicken. Woe unto those that might point out that a prominent researcher with tenure isn't using sound methods. That, as well as a generally toxic culture that makes Wall Street look good, was part of why I left.

There have been some efforts towards creating a better culture (in particular, I've had my eye on Popper [1] and the "SciOps" movement and Software Carpentry [2], as well as various open access efforts). At the end of the day though, scarcity in terms of funding plus lack of job security plus medieval power dynamics are holding any substantial cultural shift back.

[1] http://falsifiable.us/ [2] https://software-carpentry.org/


Not only is the replication crisis not solely about psychology, the numbers are far worse than half for fields like chemistry, biology, and physics[0].

[0] https://en.wikipedia.org/wiki/Replication_crisis#Overall


This is completely incorrect, and that Wiki page is highly misleading: it leads people to the same mistaken comparison, because the numbers it presents look directly comparable but are entirely different measures.

What that says is that e.g. 69% of scientists working in physics and engineering had failed to replicate some experiment at least once in their career. In other words, if they performed 1000 replication attempts of other scientists' experiments and failed to reproduce the same results once, they would be counted in that survey. By contrast in psychology, one hallmark project tried to replicate the findings of 100 key studies from highly regarded psychology journals, and it was unable to replicate the findings of 64% of them. What's that number for physics? No idea, other than that it's going to be much lower.

Make sense? Wiki is, almost as a rule, just completely awful for contemporary or potentially controversial topics and this is a textbook example of that.


That's not what the numbers there are saying though. They're saying that over half of scientists have failed to replicate a single study (either their own or someone else's). It says nothing about the replicability of most studies.


The reproduction crisis affects all the social sciences, and medicine to a certain degree. In fact, anything where it's difficult to isolate and control the phenomena faces this. Climate science, economics, and food science are all suffering similar issues.

I think this speaks more to the nature of academia, and the commercialisation of publication than the validity of the field in and of itself.


And consider this: in psychology we actually have the possibility of running and repeating controlled experiments. Think how bad things must be in other fields where that’s not the case.


At least psych is addressing the replication crisis. Other fields are just ignoring it (cough, ML).


The Milgram experiments have been replicated many times in several different cultures. I wouldn't discount everything.


> The Milgram experiments have been replicated many times

More recent critical analysis has poked a lot of holes in Milgram [0].

Important work no doubt but not quite the smoking gun it has been proclaimed to be.

[0] https://www.theatlantic.com/health/archive/2015/01/rethinkin...


I started reading this link and this sentence popped out at me:

> It’s a phenomenon that’s been used to explain atrocities from the Holocaust to the Vietnam War’s My Lai massacre to the abuse of prisoners at Abu Ghraib.

This sounds an awful lot like the sentence in The Lifespan of a Lie:

> It has been invoked to explain the massacre at My Lai during the Vietnam War, the Armenian genocide, and the horrors of the Holocaust.

We seem to be using a lot of these experiments to explain the same things ...


Well the backdrop to the Milgram experiment was actually WW2 and trying to understand what happened. It was a key motivation.

It seems to me that a lot of the seminal social psychology studies happen around this time ...


But... it's taught alongside, and on par with, unproven theories, unreplicable experiments, and science known to be fraudulent.

I mean, just because it's in a psychology textbook doesn't necessarily mean that it's BS... But that's not a decent standard. Even astrology clears that bar.


I’m more interested in the unquestioning belief that people seem to have with information provided by an authority, especially during the earliest exposure to a subject.

As the article states, the SPE is often introduced uncritically in introductory lectures. Uncritically to avoid muddying the waters (presumably) and in an introductory lecture because it is fascinating and likely to capture the imagination of a student (presumably).

I’ve encountered unquestioning belief in various degrees of bullshit taught this way, from the tainted views of history taught in elementary school, through to stuff like this in undergraduate education.

I think that students are particularly vulnerable to this at an early phase of exposure to a subject because they don’t have enough background to be critical, by the time they get that background this information is ingrained.


Certain irony here. Much of education, then, is a Prison Experiment until students develop the critical thinking skills to evaluate their professor's teachings. Not that we have a binary evaluation here, i.e. the prof is totally right or totally wrong. It's more like shades of gray, where some stuff is more believable than other stuff.


> I’m more interested in the unquestioning belief that people seem to have with information provided by an authority

Oh, the irony!


Watching this thread from yesterday, I feel that HN is trying to dismiss not just an experiment and its conclusions (which are indeed inexact and wrongly presented, if the argumentation against them holds). It is trying to dismiss a phenomenon, and to fix this forever as in “no, we are not like that.” But what Zimbardo seems to have tried to conduct resembles a typical SU prison camp, where all the people inhabiting the entire area were given some rules (both formal law and “thief law,” perverted and mostly uncontrolled) and began to create a social mess. It wasn’t sadistic or directly harmful, but in these conditions people make smart or dumb moves that lead to big trouble (often referred to as “getting into the odd”) for themselves or someone else, and thus hold a power impossible to have in regular society, along with all its long-running implications.

While most SU prisons were not the insane movie-style hells they might seem at first sight, it is still wrong to declare conditions and effects similar to Zimbardo’s disproven in a wide sense simply because he messed this one experiment up.

Main problem here is that one probably cannot make such an experiment without actually building a prison and hurting people (and that they know their deeds will not be punished). The only reliable sources are 20th century books on civil wars and prison camps, but these are a read too long for HN format, and a common knowledge on that is so vague that isn’t even worth discussing.


According to the article, the researchers themselves pushed the guards toward being more aggressive on multiple occasions. That is, the resulting behavior was not just what emerged in the absence of external control, but what emerged when an authority explicitly pressured guards to be tough and legitimized or praised that behavior.

That is a pretty big difference in terms of what the results mean.


How different is that from a new prison guard being pressured by their supervisor or co-workers to be tough on prisoners? It indicates that a bad situation can be created by bad management, and a new person introduced to an existing bad system can turn bad as well.


>> The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

This is odd because it's the exact opposite of the way I usually hear the SPE narrative discussed: it's not a tale of redemption, it's a condemnation of humanity that purports to show how any of us have monsters inside of us that come out if given the chance.


It’s a dispersion of guilt, and it allows people who do bad things to go along because they think others would do the same in their position.

Lots of people in the world have died rather than to go along with things they felt were wrong.

Don’t think that we all have monsters inside because we don’t.


Everyone has a different threshold and it seems to largely be a collective behavior, when outside of a group, people may think more critically. https://en.wikipedia.org/wiki/Threshold_model
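The linked threshold model lends itself to a tiny simulation. Below is a toy sketch (my own code and made-up thresholds; `threshold_cascade` is a hypothetical name, not anything from the linked article) showing how one person's threshold can decide whether a cascade of collective behavior takes off or stalls:

```python
def threshold_cascade(thresholds):
    """Granovetter-style threshold model: each agent joins a behavior
    once the fraction of the group already participating meets or
    exceeds that agent's personal threshold."""
    n = len(thresholds)
    # Agents with threshold 0 act unconditionally; they seed the cascade.
    active = {i for i, t in enumerate(thresholds) if t <= 0.0}
    changed = True
    while changed:
        changed = False
        frac = len(active) / n
        for i, t in enumerate(thresholds):
            if i not in active and frac >= t:
                active.add(i)
                changed = True
    return len(active) / n  # final fraction participating

# 100 agents with thresholds 0.00, 0.01, ..., 0.99: a full cascade.
print(threshold_cascade([i / 100 for i in range(100)]))  # 1.0
# Nudge a single threshold from 0.01 to 0.02 and the cascade
# stalls at the lone instigator.
print(threshold_cascade([0.0, 0.02] + [i / 100 for i in range(2, 100)]))  # 0.01
```

Granovetter's classic point is exactly this fragility: two groups with nearly identical threshold distributions can produce wildly different collective outcomes.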


I do not understand why a single experiment is considered good enough for research results in the social sciences. A single experiment with unexpected outcomes must be backed up by replication in order to start becoming interesting.

Even if you took the SPE at face value when it was first conducted, surely you have to repeat it before deciding if it is valid. What if the outcome only occurs once in every 100 times and they got lucky that first time? Surely that is somewhat important to know.
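To put a number on the "got lucky" worry, here is a back-of-the-envelope sketch (illustrative figures of my own, not from any actual study): even if a spurious outcome shows up in only 1% of runs, across many independent labs each running the experiment once, it is more likely than not that somebody observes it, and that is the run that gets published.

```python
# Chance that at least one of n independent runs shows a fluke result,
# given a per-run probability p of the fluke (hypothetical numbers).
def prob_at_least_one_hit(p, n):
    return 1 - (1 - p) ** n

print(round(prob_at_least_one_hit(0.01, 1), 3))    # 0.01  -- one run proves little
print(round(prob_at_least_one_hit(0.01, 100), 3))  # 0.634 -- someone gets "lucky"
```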

The most disappointing outcome is that nothing has really changed even after all this time. Each researcher is only one great result away from making their career.


This is a false assumption about how social sciences work.

Like in other fields, academics are repeating experiments and trying to (in)validate results, including for SPE. [1]

That being said, social sciences != hard sciences in the sense that there are no hard laws because humans are unpredictable. That doesn't mean social sciences are worthless, but it does require that one approaches them with a different mindset than e.g. math or physics.

FWIW, I do agree with your criticism: the gap between publication and repetition/validation has made many researchers famous. Bombastic-sounding results -> headlines -> public perception -> increased social and academic status for the researchers. By the time the results are repeated and (in)validated, it is often too late to reverse that. I don't think that pattern is unique to the social sciences, however; I keep seeing article after article contradicting each other, e.g. regarding drugs, food, fitness, etc.

[1] https://en.wikipedia.org/wiki/Stanford_prison_experiment#Sim...


What mindset is that, and how exactly is it different?


IMO - the mindset should be: social sciences are dealing with things orders of magnitude more complex than anything seen in hard sciences (except biology). Therefore, social scientists need to be much more reserved in their claims, and not much should be taken for a fact unless it's replicated in great many studies and has proven predictive power.

Whether this is the mindset social scientists have, I can't tell - but social sciences in popular media seem to be the exact opposite of that.


>Whether this is the mindset social scientists have, I can't tell - but social sciences in popular media seem to be the exact opposite of that.

I think it's just not a very human mindset. I've talked to natural scientists and social scientists alike. All of them want their research to succeed, they all want to get their PhDs, get good results for their postdocs, get publications going etc.

The incentive is just there to publish somewhat inflated results. If you could get a PhD for systematically dismantling studies (which would be much more useful than most PhD studies), we'd have a replication crisis in every field.


Quote: "I think it's just not a very human mindset. I've talked to natural scientists and social scientists alike. All of them want their research to succeed, they all want to get their PhDs, get good results for their postdocs, get publications going etc."

And that is one of the major problems: hubris and arrogance, and thus bad requirements and expectations, and thus cheating, lies, and half-truths as normal behavior.

Also in economics: http://bilbo.economicoutlook.net/blog/?p=39198

Quote: "Further, in my field (economics) one can never really get a publication if the research only produces ‘negative’ results. That is, the researcher fails to find anything. I believe this is a common problem in other disciplines as well."


(Just my 2 cents, your mileage might vary)

I think that in the hard sciences, things are very cut and dry - or at least that is the goal of a study. For example, in physics, the goal of experiments is to prove with a very high certainty that something is true or false.

Humans on the other hand are unpredictable. So if you run an experiment that says X, it may or may not replicate later, depending on hidden variables and assumptions.

Consider the famous marshmallow test. The latest studies suggest that it is not willpower but actually affluence that is the bigger determinant factor. [1] So that means that in future studies, they probably need to consider this variable and design the experiment in a way that they can control for it.

What is interesting is that for all the differences and intervening variables, humans can be studied and do exhibit very predictable patterns. A good example of that is the study of power. [2] The Prince was written in 1532, and its principles continue to be just as valid today!

[1] https://www.theatlantic.com/family/archive/2018/06/marshmall...

[2] https://en.wikipedia.org/wiki/Machiavellianism


> Humans on the other hand are unpredictable. So if you run an experiment that says X, it may or may not replicate later

> humans can be studied and exhibit very predictable patterns.

Seems somewhat contradictory. If humans can be studied and exhibit predictable patterns, why wouldn't we expect experiments to be repeatable, as the parent comment asked?

And if the experiments are highly random, then either you should be conducting more of them over and over to get a valid statistical sample, or you shouldn't be conducting them at all. Either way I see no valid argument that studies in social sciences shouldn't be repeatable.


This is more of an exploration than an explanation but it seems like you're pointing to the threshold of success in a series of experiments.

As deyan mentioned, in physics or chemistry we get a high level of certainty after isolating all the variables. When an experiment doesn't go according to plan, there is a known or unknown variable to blame.

It seems to me that behavioral psychology is still in its infancy in terms of identifying those variables and/or the threshold of Truth is much lower than sciences like chemistry.

Lots of patterns exist that have lower thresholds. Sports analogies are fairly illustrative. Hitters in baseball are considered great if they succeed 3 times out of 10. Then again, they run 500+ experiments a year (for hopefully many years) to determine their average...

I agree with your conclusion that experiments needed to be repeated more over time. I simply wonder what type of success threshold we will look at as the Truth in time.


The reason I asked is because strictly speaking, an unpredictable thing cannot be studied empirically. We can sacrifice some precision or certainty in science, but we can't get rid of reproducibility and still call it "science" just by prefixing it with the term "soft."

For example, sound economic models are generally observable in the aggregate despite being imprecise, and high energy physics has many competing theories which are demonstrable but incomplete. The game theoretic principles of market analysis are reliable, as are the principles of gravity. Markets are generally efficient, and the model's conclusions have clear utility that matches real world conditions, even though small pockets of inefficiency also exist. It's fuzzy, but not unscientific. Forests are green, but some trees don't have green leaves.

In the abstract, we can tolerate some fuzziness or imprecision as a margin of error, but only if it's compartmentalized to some incomplete theories, and only as long as it's grounded and consistent. We cannot tolerate something being true one day and false the next. Green forests cannot inexplicably and inconsistently become orange without threatening our claim that forests are green.

I don't really have an opinion on psychology in particular, though it's pretty clear there's a reproducibility crisis. But as a direct response to your thesis: arguing that a "different mindset" is required to scientifically study subjects which are unpredictable is an untenable position. If humans actually are fundamentally unpredictable - whether due to intrinsic non-determinism or a present lack of sufficient data - they cannot be empirically studied. At that point we're no longer compartmentalizing incompleteness or fuzziness in otherwise sound models. Instead, we're compartmentalizing otherwise sound observations in a sea of chaos.

Faced with this sort of reality (and I take no position on whether it is the reality), any scientist, in a "hard" or "soft" discipline, would have to examine if they can reasonably acquire enough information related to the thing in question to make any determination in good faith. An unpredictable thing is an unknowable thing; you may as well try to resolve the three body problem.


A blank slate-ist interpretation is normalising by parental wealth.

An alternative interpretation is removing the genetically inherited delayed gratification component that pays off financially in modern life.


The outcome was not entirely unexpected. The article mentions that this is what Zimbardo originally intended. But I think even more important in this story is the great story-like quality of the result. It's a great story, and it "makes you think", as they say. One professor in this text is quoted actually saying that the story is bigger, meaning presumably more important, than the science. I remember similar reactions when some of Gladwell's shenanigans were uncovered. It just doesn't matter if it's true, if it makes a nice story. In fact, it's probably even better if it's not true.


Reminds me of religious parables. No-one objects to Jesus's parables on the basis of their being factually untrue--they're obviously fiction and that's not a mark against them. On the other hand, the SPE is marketed as empirical science, not spiritual revelation!


Yes. And the fact that, being taken as serious science, it actually affected penal policy, makes it particularly ugly.


Is there a definitive writeup of what the fuck is wrong with Gladwell('s writing)?


I stumbled upon a good text on this once, but I don't remember where it was published, so I guess you'd best google it, sorry.


> I do not understand why a single experiment is considered good enough for research results in the social sciences.

It isn't. This is a problem of presentation to nonexperts. Talk to grad students in psych and they will have an accurate understanding of the SPE and its place in the literature.


There were a whole bunch of ‘studies’ done after World War Two that purported to show how it was possible that ‘good people’ could be induced to do bad things.

I think the ones that gained wide currency were the ones that most excused truly horrific and servile behavior by individuals.

I don’t believe for a second that someone who would torture or abuse or kill someone on the orders of someone else was ever a ‘good person’.

Don’t tell me that someone who is taking someone’s child from them right now is a good person. I quit a job just recently working for DHS precisely because I didn’t want to participate in that kind of behavior, even tangentially.


> I don’t believe for a second that someone who would torture or abuse or kill someone on the orders of someone else was ever a ‘good person’

That's easy to say. But context matters a lot!

What about this: your family is held hostage, or you live in a dictatorship where disobedience means prison at best, and often retaliation against your family. Real example: I've met a guy at work, a political refugee, whose daughter was ABDUCTED by the police of his country to put pressure on him! Can you imagine his anguish? Now, you are ordered to go into this room and "extract a confession" from someone, or to kill them. What would you do? (Knowing that there is NO CHANCE you can escape/run without awful consequences, at least for your family; it's not a movie.)

Same for going to war. Many times in history have the poorest been forced to go to war under pain of death (and possible retaliation on their family). What would you do in such a situation? Especially when you are poor and without "connections".

Truth is that it's very easy to force people to do awful things by the use of force/coercion. Very few will want to be martyrs or put their family at risk.


Do note that historically, in most cases of torture the torturers weren't themselves under threat, and their families weren't either. This hypothetical doesn't happen in practice.

I tend to support the parent commenter's opinion that torturers were never good people to begin with. I can understand murderers, but not torturers, and especially not torturers of people who haven't directly wronged them (e.g. when it's an "interrogation").

Extreme examples such as "but what if your daughter was raped and murdered by your prisoner" tend NOT to be the case of real life scenarios where torture actually happens.


> I don’t believe for a second that someone who would torture or abuse or kill someone on the orders of someone else was ever a ‘good person’.

Fair enough. How about "not a monster"?


One interesting thing about Milgram's experiment was that it was actually intended to detect a difference between the German and American psyche. This is a case of science working to falsify an incorrect belief.

The argument being that there was something wrong in the German character that made it easier for someone like Hitler to get "good people" to do bad things. Experiments were run in the US as a test and control, and the results were well-established long before Zimbardo's ridiculous non-experiment.

We only need to look at the behavior of ICE agents to see real world confirmation of Milgram's experiments.


The narrative of the original experiment may be a "lie", but in what way is a Medium article the "truth"? It would be very important to educate and remind ourselves about how science actually works. It does not work with simple lies and truths, it is a path we follow and where we make many missteps. Even if Zimbardo did everything wrong - experiment, documentation, publishing, narrative, conclusions, influence - it is perfectly all right, the big river of science will slowly but steadily float it in the "right" direction.


No, it's not all right. The idea of science is that things are reproducible and that scientists at least try to be honest about what, why, and how they do their experiments. You're effectively saying that scientists might as well stop doing anything and just write fake reports; it's all the same after all!


You are right, it is not OK to deliberately lie. What I meant is that it's OK from the point of view of science: science will correct these kinds of "lies" in the same way it corrects erroneous measurements, documentation errors, misunderstandings, whatever. Scientific papers are actually full of these, and I dare not guess how many are written due to publish-or-perish (which are also a sort of lie, as the writer has not much to say, or only something of negligible importance). We could even think of lied-about results as viruses which the immune system will handle.


What? There's a definite difference between "Whoops, I accidentally measured this wrong back in the day, sorry" and "I'm covering up that I measured this wrong".


From a moral point of view, there is. My point is that from the point of view of science, the same self-correction should work in both cases, so the bigger story than "somebody lied" is that "in the case of this attractive lie, the self-correction didn't work well".


If he lied about the experiment and results... It's a lie. Pretty simple.


Be sure to read to the end where the author brings up his own family's tangential involvement in the Zimbardo saga.


Could we think about this experiment as a meta-experiment? Zimbardo experimenting on the research community and the public, seeing how long we keep something like this alive just because it is an attractive story? (I am not defending him!)


This is a bananas article and kind of makes a case for Zimbardo as a sort of Alex Wakefield of psychology.


Who is Alex Wakefield?


Person who published the original (bad) study that set off the anti-vaxxers. The parent just promoted him from an Andrew to an Alex.


Ugh. Sorry!


I'd just like to reemphasize what conclusions you should and shouldn't draw from an experiment being invalidated. If an experiment is run that says the sky is blue, then its invalidation does not mean that the sky is not blue. It means we no longer have evidence for that conclusion. The correct answer then isn't the negation of the original study's conclusion. The correct conclusion is that we don't know or aren't sure.

Likewise, if the SPE is invalid, it doesn't imply that average people aren't capable of horrific behavior - it simply means we have less evidence to determine that than we thought we did.


>Likewise, if the SPE is invalid, it doesn't imply that average people aren't capable of horrific behavior - it simply means we have less evidence to determine that than we thought we did.

Agreed; SPE just doesn't constitute evidence for Zimbardo's particular narrative.

>According to Alex Haslam and Stephen Reicher, psychologists who co-directed an attempted replication of the Stanford prison experiment in Great Britain in 2001, a critical factor in making people commit atrocities is a leader assuring them that they are acting in the service of a higher moral cause with which they identify — for instance, scientific progress or prison reform.

This quote, and everything actually observed during the SPE, is consistent with Milgram's work, which has been replicated a zillion times.


> This quote, and everything actually observed during the SPE, is consistent with Milgram's work, which has been replicated a zillion times.

I think you may be cherry-picking quotes here, here is another from much later in the article:

> In another blow to the experiment’s scientific credibility, Haslam and Reicher’s attempted replication, in which guards received no coaching and prisoners were free to quit at any time, failed to reproduce Zimbardo’s findings. Far from breaking down under escalating abuse, prisoners banded together and won extra privileges from guards, who became increasingly passive and cowed. According to Reicher, Zimbardo did not take it well when they attempted to publish their findings in the British Journal of Social Psychology.


Milgram's work is on the power of authority, so your quote showing that behavior did not follow the desired results without the professor present is also consistent with Milgram.


> At the end of each of the Bridgeport experiments [Condition 23], before he revealed the hoax, Williams asked each man what he thought the Research Associates of Bridgeport was and why he had volunteered. Milgram had presumably told Williams to ask this to demonstrate that subjects had no idea that Yale was behind the research. Most said they didn't know, but many assumed that it was an organization conducting scientific research. Williams probed this issue with several subjects, and it was obvious that even though many didn't have the slightest clue about the organization and what it ostensibly did, they were still prepared to trust it because of the scientific imprimatur. One man who had gone to 450 volts said that he was motivated by his six-year-old daughter.

> Man: I can only say that I was- look, I'm willing to do anything that's ah, to help humanity, let's put it that way.

> Williams: Right, that's what we're doing.

> Man: I've got- I've got a child that's a cerebral palsy child.

> Williams: Have you really?

> Man: And you know they're experimenting steadily on trying to find a cure for it. It's a sad thing.

From his correspondence we know these experiments were very personal to Milgram, motivated in no small part by his desire to understand the Holocaust. Yet these experiments were not testing obedience to generic authority, it wasn't testing obedience to military officers or gestapo thugs. The above man was compliant because he thought he was helping to advance medical science. The whole thing is farcical.


Military officers and Gestapo thugs were compliant because they thought they were helping their race in a struggle for survival against a hostile enemy race. They also believed that this was the highest calling and the expectation placed on a manly man. Some of them also thought they were advancing medical science.

The Gestapo were the top police and believed that they were enforcing the law, as they were supposed to.


>"Military officers and gestapo thugs were compliant because they thought they are helping their race in struggle for survival in competition against hostile ennemy race."

Yes, this seems likely. Which flies completely in the face of the traditional narrative of the Milgram experiments: that people follow the orders of authority figures blindly.

What the Milgram experiments actually showed is that people are willing to do something they normally wouldn't if they believe in the cause. The teachers in the Milgram experiments believed in science and were willing to step outside their comfort zone to advance science. The Gestapo believed in their cause too, and were similarly willing to do things a man otherwise wouldn't.

In both of these cases they weren't simply doing what they were told, they were not blindly following orders as Adolf Eichmann had unpersuasively pleaded.


Adolf Eichmann was a bit too high-ranking, and had a bit too successful a career, for someone who "just followed orders". You did not even have to join the SS, much less take a leadership position.

Definitely agree on that one.


First, the results have largely been replicated by other scientists, though ethical constraints limit the amount of replication that can be done.

Second, presumably the people in the control group without any apparent authority figure present would also have assumed it was a scientific experiment. So I'm not seeing how you think this ruins the experiment.

Finally, one of the Nazi justifications for the Holocaust was an attempt to advance medical science. So this test would still fit his motivation, though it does muddy any conclusions you can draw.


What control group? Condition 23 was meant to isolate the role the influence of Yale played, yet it was still a setting that the participants considered scientific as evidenced by that quotation of one of the Condition 23 participants.

I don't doubt that if you replicate the experiment you'll replicate the results. But the experiment isn't proving what it's said to prove. On the contrary, much of the experimental results contradict the mainstream narrative told about the experiments: http://journals.plos.org/plosbiology/article?id=10.1371/jour...


Well, the message people got was that students who were knowingly participating in a college psych study became spontaneously sadistic within just a day or two.

That feels intuitively ridiculous, and has no evidence I know of to support it, except this study.


It's not unheard of, though. It's just likely not repeatable or reproducible ad hoc.

That said, these group experiments are very hard to control (as in, controlling for confounding variables/factors) and very easy to screw up methodologically (maybe you fail to pre-register one interesting hypothesis). They are very hard to design properly (forget to control the lighting) and even harder to run well (the building where you want to run it doesn't have adjustable lighting; or you want cold-water showers only, but that's not simple to achieve, so you skip it and just mention it in the paper. Was that the cause of a negative/positive result or not?).


The idea behind the study was to try to understand how an entire nation - Germany - could go berserk with hatred and sadism. Scientific or not, the burden of proof lies with those who dispute Zimbardo’s (and Milgram’s) findings, because they have been borne out so many times in so many different contexts. Their work has never been more relevant than it is now, as we witness the rise of Trumpism.

What alternative explanations would you propose?


1) Milgram's and Zimbardo's findings are very different. Milgram's have been backed up by many replications and are often mis-understood.

2) We have both experimental and factual evidence that shows that Zimbardo's findings were lies.

3) The entire nation of Germany did not go berserk with hatred and sadism.

4) Name one instance where Zimbardo's findings have been borne out in "so many different contexts"?

The rise and success of Nazism (like most social movements, including 'Trumpism') was the result of a confluence of a number of factors. To pretend that it was just "how people are" is to do both people and the truth a disservice.


> "Milgram's have been backed up by many replications"

From Behind the Shock Machine by Gina Perry:

>On the recording, a much younger Hannah sounded perky and confident as she talked to Williams at the beginning of the experiment. But you could hear her confidence getting shakier once the learner started to give incorrect answers, and it became clear that his memory was not reliable. By the third shock, 45 volts, she was stumbling over the words. At the fifth shock, 75 volts, when the learner made his first sound of discomfort, there was a pause. Then I heard the following exchange.

>Hannah: [to Williams] Is he all right? [into the microphone] The correct one was "hair". [to Williams] Is he all right?

>Williams: Yes, please continue.

>Hannah: All right.

To put it lightly, the Milgram experiments were trash. They were measuring trust in scientists, not compliance with authority. What undergrads are taught about Milgram is that the experimenter would ask the teacher to "Please continue", but what actually happened here is that the experimenter assured the teacher (Hannah) that the learner was all right. The experimenters played fast and loose with the procedure, saying whatever they thought was necessary to get the teacher to continue. And sometimes it wasn't even measuring trust in scientists; there is at least one teacher who recalls the experimenter, in response to their protests, asking if they'd like to swap places with the learner (essentially threatening the teacher to get them to comply). The experimenters, and particularly Milgram, knew what results they wanted before the experimentation even began, and made sure they got those results.


It's been a while, but my understanding was that the results of the Milgram studies have been fairly widely replicated with consistent results.

Now, what those results mean / what is being measured by them is another issue.


I had not heard of Behind the Shock Machine, thank you.


> Name one instance where Zimbardo's findings have been borne out in "so many different contexts"?

Probably the best examples would be found in the in-group/out-group dynamics in religious cults and closed-state regimes. The Nazis exploited these principles in the 1930s to impressive effect. But what happened in Zimbardo's experiment is more closely analogous to what you see in places like North Korea today, along with abuses carried out in insular groups ranging from the Branch Davidians to the Mormons. The same hierarchies form and the same roles are played.

It is foolish in the extreme to dismiss Zimbardo's work because of a few ethical hangups. What he did is highly reproducible, but only outside the bounds of regulated academia.

Milgram has indeed come under similar criticism from people who are desperate to find another explanation -- any other explanation -- to distract themselves from the truth about what human beings actually are.


Did you seriously just compare Mormons to Branch Davidians? And North Korea? Mormonism has its problems (excellent article on that at https://longreads.com/2018/06/07/meet-the-new-mormons) but that is totally over the top. If you want to leave North Korea, you risk your life and your family's lives in a terrifying journey over the Chinese border. If you want to leave Mormonism, you write a letter saying "please remove me from your list."


I agree with you that GP's comparison was a bit drastic, to say the least.

However, you have gone too far in the other direction. You make it sound like leaving a religion is just as easy as unsubscribing from a newsletter.

Visit one of the ex-religion subreddits sometime. I know Mormonism and Christianity both have one. To be fair, those online groups will be selection biased towards the most difficult situations. But I think many of those experiences are not terribly uncommon.


Are we forgetting about the actual founder of the faith?

Either he was a crazy old coot, and we can safely say that present company has moved past his behavior, in which case it's hard to credit a religion based solely on his revelations; or he was a true prophet, and we ought to analyze the nature of the organization based on his words and deeds rather than just present lukewarm implementations.

Look up blood atonement, whereby killing sinners (presumably including heretics) was the prescribed solution.


They are taught from birth that this results in infinite torture.


At least in that regard, mormons are not so different from regular christians.


Many Christians do not do that, FYI. I know many still do, and I can't speak for all of them.


I would wager the majority of Christians believe in hell and teach that belief. Perhaps not in San Francisco... but across the country as a whole.


> Probably the best examples would be found in the in-group/out-group dynamics in religious cults and closed-state regimes. The Nazis exploited these principles in the 1930s to impressive effect. But what happened in Zimbardo's experiment is more closely analogous to what you see in places like North Korea today, along with abuses carried out in insular groups ranging from the Branch Davidians to the Mormons. The same hierarchies form and the same roles are played.

You mean groups that use heavy levels of indoctrination to induce this behavior? That is the exact opposite of Zimbardo's claim that all you have to do is randomly assign someone a role and they will exhibit this behavior without prompting.

> It is foolish in the extreme to dismiss Zimbardo's work because of a few ethical hangups.

His work is not dismissed because of "ethical hangups" about how he treated experimental subjects, but because he has repeatedly lied about and misrepresented the experimental protocol he followed. Lying about experimental protocols is one of the BEST reasons to dismiss a scientist's work.

> What he did is highly reproducible, but only outside the bounds of regulated academia.

Except the study he claims to have run has been reproduced and did not have similar results.

> What he did is highly reproducible, but only outside the bounds of regulated academia.

We have evidence that he didn't actually do what he claimed to do.

We also have evidence that if you do what he claimed to do, you get different results.

> Milgram has indeed come under similar criticism from people who are desperate to find another explanation

Milgram did excellent science. Unlike Zimbardo, he did careful follow-up studies to try to understand and elucidate the results he found, and was generally careful about characterizing the conclusions that could be drawn from his work. Milgram's studies and the conclusions he drew from them are widely misrepresented in popular culture by people eager to provide an excuse we can use to absolve ourselves of responsibility for "just doing our jobs".


Except we do have evidence that the specific claims made by Zimbardo were fraudulent and we also have experimental evidence that shows that these results are not accurate if you actually implement his claimed methodology.

SPE now needs to be taken as a cautionary tale about ethics and science, nothing more.


For the very interested but so far unlucky: where are these specific claims and the documentation of their fraudulent nature?

Thanks in advance!


The article goes over several. Off the top of my head: the supposed mental breakdown of Korpi (he was faking it), Zimbardo claiming he wasn't politically motivated, Zimbardo claiming the students were free to leave at any time, Zimbardo lying or misleading as to what degree the 'guards' had been coached (specifically with regard to the role of Jaffe.) Details about each of these are available in the article.


The study wasn't just invalidated, it was misleadingly characterized by the people who ran it and promoted its findings.

This does more than just withdraw some evidence in favor of the idea that ordinary people can easily commit horrific behavior. It provides new, positive evidence that people have an incentive to spread this idea fraudulently. This should properly cause us to re-examine all evidence for this thesis very closely.


Anyone who was accepted to Stanford in the early 1970s was, by definition, not an average person. That's the whole point of being accepted at Stanford.

I'm not saying it means Ivy League students are more likely to be psychopaths, but I am saying it's not scientifically valid to take a study of a population from a hyper-elite American college during a draft war and apply the results of that study to the average person, especially if the study is supposed to remark on factors regarding social pressure and social expectations.

"We knew [the guards] couldn’t hurt us, they couldn’t hit us. They were white college kids just like us, so it was a very safe situation. "


College students are WEIRD (Western, Educated, Industrialized, Rich, and Democratic), as the paper put it.


I think you have presented a fallacy:

> If an experiment is run that says the sky is blue, then its invalidation does not mean that the sky is not blue.

If an experiment's hypothesis is that the sky is blue and the sky is indeed blue, then there is nothing wrong and there is nothing to be invalidated.

If an experiment's hypothesis is that the sky is blue because there is a bunch of leprechauns throwing blue Skittles at it, and there is evidence that this is not the case, then the experiment is to be invalidated and, in your own words, "[i]t means we no longer have evidence for that conclusion."

If you don't have the evidence to support your hypothesis, then you can't draw any conclusions except, at best, that there is no evidence to support your hypothesis. To put it in the context of the running example, this translates to: the sky is blue, but it's not because leprechauns are throwing blue Skittles at it. You can't say "just because my hypothesis about leprechauns is invalidated, it doesn't mean that there are no leprechauns throwing Skittles at it" - because that's simply what you believe and wish to be the case; and a good scientist should be impartial to beliefs.

More importantly, the whole point of the article is beyond invalidating the SPE. It's, in my opinion, a great piece about much deeper problems in scientific or, well, "scientific" methods.


It still could be leprechauns, though. A null hypothesis is not the same as the inverse of the H1.


The premise of the experiment isn't just that people are capable of horrific behavior. It's that the thing that brings this about is largely circumstantial and a function of our role in society.


A problem with that is that people will remain tainted by the urban legend type persistence of the conclusions. So if one could void this study that’d be a better outcome.


Is it possible that Zimbardo is now legally responsible? He influenced policy in this way, with what seems to be obvious malicious intent, and he also apparently testified in many court cases. Can he be prosecuted for this?


> "He influenced policy in this way, with what seems to be obvious malicious intent"

Actually he probably thought he was doing a good thing. He didn't intend to harm people, but to help people (by creating a fraudulent narrative that would work towards the greater goal of prison reform), with enough fame and personal profit to sweeten the deal. It seems likely to me that Zimbardo believes the ends justify the means.

Still, it would be satisfying to see him prosecuted for this fraud.


I may have overstated it, but it seems he's been intentionally lying, including in court... Just that should suffice, regardless of his reasons.


The SPE is kind of like the Milgram experiment in that people tend to do things they would otherwise not have done when given the green light by an authority.

It reminds me of a short story illustrating this beautifully.

(From The Way of Kings) http://www.kurt-anderson.com/main/2036/media/reading/derethi...


> According to Alex Haslam and Stephen Reicher, psychologists who co-directed an attempted replication of the Stanford prison experiment in Great Britain in 2001, a critical factor in making people commit atrocities is a leader assuring them that they are acting in the service of a higher moral cause with which they identify

The drug war in a nutshell ladies and gentlemen.


> But Eshelman, who had studied acting throughout high school and college, has always admitted that his accent was just as fake as Korpi’s breakdown. His overarching goal, as he told me in an interview, was simply to help the experiment succeed.

There are countless other quotes of guards and prisoners saying they were "just acting". But isn't that equally powerful? I'm not sure it matters so much why you're locking someone in a closet for 6 hours (just acting, wanting the experiment to succeed, wanting to please your teacher, having sadistic tendencies, etc.); it's that you followed through and performed that action. The fact that you did something you wouldn't normally do might be even more significant. It's one thing if the guards and prisoners all know they're acting, but that doesn't appear to be the case. It sounds as if individuals were acting of their own accord, but unaware which behaviors and instructions were real vs. not real.

Even in real life, how many people in the hierarchy of a dictatorship are "just trying to please" the dictator vs. really "buying in" to the philosophy? I guess it matters in terms of trying to change the system in the future, but it doesn't matter that much in evaluating the harm done to the powerless in the past.

I don't think power always veers towards dangerous dominant/submissive behavior, but history seems to have an ample supply of evidence that it certainly can be the case, regardless of why it gets there.


>There are countless other quotes of guards and prisoners saying they were "just acting".

It seems entirely possible that "just acting" was an after the fact justification. People are very unreliable witnesses of their internal processes, this is probably even more true when someone has reason to feel guilt for their actions.

>I'm not sure it matters so much why you're locking someone in a closet for 6 hours (just acting, wanting the experiment to succeed, wanting to please your teacher, having sadistic tendencies, etc.)

Fake it until you make it, torturer edition.


Yes, exactly. Imagine trying to explain to people that you participated in something where you did horrible things for structural reasons. To really pull that off, you need to convince every interlocutor that they too would be prone to this. When it's much easier for your audience to believe that it's just you that's horrible, and they are fine and decent people who wouldn't do as you did.

Especially decades later, I'd be very reluctant to naively credit people's explanations here.


Except we don't have to, we have experimental evidence that shows what happens and what happens is not what 'happened' in the SPE.


We don't have to what?

I could believe that the SPE doesn't really get at it, but people do end up doing horrible things for structural reasons. That doesn't change the incentives for trying to explain that to people who weren't there.


What's the p-value on these? How many trials were there?

We have separate events, and we try to compare them, and use them as "evidence".

I don't think the SPE is great science, but people are very easily driven by circumstances, and it'll take a lot of time to fully unpack the dynamics and ethical consequences of that.


It's not the same because the conclusions are about humans becoming sadistic just because they put on a uniform. The whole pitch of the experiment was their behavior was spontaneous, unsolicited, and caused by the setting. I.e., not caused by the experimenters asking the participants to behave a certain way.

The Milgram effect (superiors asking you to cause pain because science + authority) has been replicated many times.

This is all explained quite well in the article.


I'm under the impression that the Milgram experiments are also not quite what people have generally been led to believe. I've not read the book yet (I just got a copy though).

>"Over 700 people took part in the experiments. When the news of the experiment was first reported, and the shocking statistic that 65 percent of people went to maximum voltage on the shock machine was reported, very few people, I think, realized then and even realize today that that statistic applied to 26 of 40 people. Of those other 700-odd people, obedience rates varied enormously. In fact, there were variations of the experiment where no one obeyed."

-Gina Perry, author of Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. https://www.npr.org/2013/08/28/209559002/taking-a-closer-loo...
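For a sense of how much statistical weight that headline "26 of 40" figure carries on its own, a back-of-the-envelope Wilson score interval (a sketch in plain Python; the only input is the 26/40 result quoted above) shows the 95% interval spans roughly 50% to 78%:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# The famous headline condition: 26 of 40 subjects went to maximum voltage.
lo, hi = wilson_interval(26, 40)
print(f"65% observed; 95% CI roughly {lo:.0%} to {hi:.0%}")
```

With only 40 subjects in that one condition, the interval is wide, which is consistent with Perry's point that the single 65% number conveys far more certainty than the data supports, especially given that obedience rates across the other ~700 subjects varied enormously.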


The Milgram experiments are far more interesting, and much better science than the SPE, but the popular understanding (and teaching) of them misses some of the most interesting conclusions that can be drawn from them.

It's been a while since I studied this, but the most interesting part of the Milgram experiments is not that this behavior can be induced, but the follow-up studies which examined exactly which elements are most important to this inducement.


>It's been a while since I studied this, but the most interesting part of the Milgram experiments is not that this behavior can be induced, but the follow-up studies which examined exactly which elements are most important to this inducement.

Well, to many people that hear about it for the first time, the mere fact that this behavior can be induced is shocking. It goes against the narrative that there are "good" and "bad" people.


The thing I find frustrating with Milgram is that there are three narratives going on simultaneously:

1. The actual reality: this is an ethical experiment and no one is at any risk.
2. The pretend actual reality: someone is being (potentially) fatally injured - certainly being hurt beyond the point of toleration.
3. The false reality: everything is fine because of science.

Almost everyone is concerned that they may be in reality 2 - they ask the researchers who say some variation of "please continue". This leads them to the entirely correct conclusion that this is a safe environment and they can continue.

However, they are meant to have concluded that they really are in reality 2 and we judge them accordingly - even though they were never in reality 2, and never did anything horrifying to anyone.

But why should they consider the other person their responsibility - surely the other subject can just say "that's enough I don't care about science enough to carry on"? If the torture subject can't freely decide to stop then that means the "torturer" is probably also in mortal danger!


>But why should they consider the other person their responsibility

Because that's what we hope to expect from people in a society: to not be bystanders when others are harmed, and not be accessory to such harm.

>- surely the other subject can just say "that's enough I don't care about science enough to carry on"?

Maybe yes, maybe no. Maybe the researchers are blackmailing the other subject or holding them against their will.

What you know is that they seem to be in great pain.

It boils down to this: do you believe your eyes and common sense, or what the authority tells you?

The experiment shows that many opt for the latter - and that is scary.

>If the torture subject can't freely decide to stop then that means the "torturer" is probably also in mortal danger!

History has shown us that this is not the conclusion people make.

Remember "First they came for _____, but I didn't say anything because I was not a ____"?


It's unlikely that the "teacher" subject believed #1, because if they did, then the whole experiment makes no sense, unless the "teacher" felt pressure to conform. That is still a similar conclusion, AND in the absence of evidence that the torture was fake, it strongly suggests that people are willing to ignore real suffering if someone in authority denies the suffering - and that is seen throughout politics, religion, and business...


> >1. The actual reality: this is an ethical experiment and no one is at any risk

>It's unlikely that the "teacher" subject believed #1

There is ample evidence that many of the teachers believed the experiment to be a sham and understood that nobody was actually being harmed. Some of them noticed inconsistencies in the experimental procedure, such as the learner not being paid, flaws in the narrative such as the learner with supposed heart problems being far too willing to be subjected to shocks for beer money, or general incredulity at the entire circumstance including unwillingness to believe Yale would permit such obviously unethical experimentation.

>is strongly suggests that people are willing to ignore real suffering, if someone in authority denies the suffering

It only shows that if the teacher is persuaded it's being done in the name of a cause the teacher already believes in.

>and that is seen throughout politics, religion, and business...

People who view all of these things through the lens of traditional narrative of the Milgram experiments are doubtlessly discounting the role true belief in the cause plays. Adolf Eichmann claimed he was just dispassionately following orders, but do you really believe that? You shouldn't. The traditional narrative of the Milgram experiments would have you believe just following orders is normal human psychology, but actually the experiments did not show that.


>People who view all of these things through the lens of traditional narrative of the Milgram experiments are doubtlessly discounting the role true belief in the cause plays

Do they? I agree with your view on the takeaway from the Milgram experiments - that conviction in cause is the necessary factor in addition to the nudging from the authorities.

Does it makes it any better?

Replace (science + man in a lab coat) with (religion + man in a robe) or (political ideology + man wearing the right insignia) or ("we are saving the world" BS + manager in a $80 T-shirt), and you'll get the same results as long as the cause is believable.

If you are the one being hit with a stick for nothing, does it really matter to you that the one who did it truly believed that it was for some higher cause - without pausing to think how perhaps this is not the best way to get there?


> "Do they? I agree with your view on the takeaway from the Milgram experiments - that conviction in cause is the necessary factor in addition to the nudging from the authorities. Does it makes it any better?"

A Nazi who is a true believer is emphatically not better than a German who was merely following orders. I think most would say he's actually worse.

However, an observer who recognizes the role of belief is better than an observer who discounts belief and views the situation only as Milgramian obedience to authority. The former observer is better than the latter because he has a more accurate understanding of the nature of the problem, and is therefore better equipped to deal with it.

I'll also say this: I personally believe dedication to a cause is amoral. If it's the right cause, dedication to it is generally moral. If it's the wrong cause, dedication to it is immoral. It's not dedication that made true-believer Nazis evil, but rather the cause to which they had dedication. Many of the men who died fighting the Nazis were similarly dedicated to their causes. Furthermore, specifically relating to the Milgram experiments, people who are dedicated to a good cause can be misled into behaving in an immoral way. If we didn't recognize that, we might be left to conclude that apathy is virtuous.


What those shocked people (ha) are often not told is that Milgram went to great lengths to appeal to people's altruism (desire to advance the state of science) when selecting subjects, and to persuade them of the scientific validity of what they'd be asked to do - and that once the experiments began, the experimenter would frequently pressure the 'teacher' by invoking the necessity of science.

He was measuring "obedience to authority" insofar as "authority" meant trust in scientists credentialed by Yale. One variation of the experiment, following the same script as the infamous variation, tested obedience rates outside the Yale campus and, of course, found significantly less compliance.

Or put another way

>'according to academic Don Mixon, Milgram didn't measure immoral behavior in his lab. On the contrary, he argued that what Milgram measured was misplaced trust. [...] Don found the same results as Milgram but came to completely different conclusions. he argued that it wasn't immorality that drove Milgram's subjects to flip the switches but trust in the experimenter, who, despite the cries from the learner, calmly told them to continue and gave the impression that there was nothing to worry about. [...] According to Don, Milgram simply measured the faith that people put in experts: "He found just the opposite of what he thought he found; nothing about the subjects' behavior is evil. In fact, people go to great lengths, will suffer great distress, to be good. People got caught up in trying to be good and trusting the expert. [...] The only evil in the obedience research, Don came to believe, was "the unconscious evil of the experimenters"'


Potato, potato in my opinion.

If anything, Don's perspective is even more shocking. People doing something that (to their best knowledge) hurts the other person, and hurts them too (less so, but still), because their authority figure told them it's for the greater good?

I'd take a sadist that thinks for themselves instead.


Milgram didn't persuade these people that science is good. He persuaded them that what they'd be asked to do was for science.

In either case, it's far off from the "people follow orders" narrative. When teachers in the Milgram experiment were confronted with direct orders, their compliance rates plummeted. They were compliant when they believed they were cooperating with Milgram, and noncompliant when they believed they were being ordered by Milgram. During their periods of compliance they identified with Milgram by sharing the common goal of advancing science. The fourth prod, the direct order ("You have no other choice, you must go on"), caused any perception of a shared identity between the teacher and Milgram to evaporate and, with it, their compliance.

http://journals.plos.org/plosbiology/article?id=10.1371/jour...


>Milgram didn't persuade these people that science is good. He persuaded them that what they'd be asked to do was for science.

...which is something a volunteer for a science test sees as a good thing.

>In either case, it's far off from the "people follow orders" narrative.

Indeed. I always took it along the lines of "when put in a setting where it's acceptable to harm others consequence-free, people need little encouragement to do so".

This doesn't seem to contradict what you are saying.


> It goes against the narrative that there are "good" and "bad" people.

Actually, it doesn't really. The experiment says far more about the importance of perceptions of authority than it does about "good" or "bad" people.


> Actually, it doesn't really.

I think you misread. The results of the Milgram experiment (that authority is more important) are what go against the common narrative that there are "good" and "bad" people. You're agreeing with him.


> the conclusions are about humans becoming sadistic just because they put on a uniform

The experiment was not about how clothing affects behavior but was about presenting a group with power and encouraging them to use it. The uniforms were important in establishing group identity, but not the sole or even primary focus of the experiment.


Please don't create arguments by taking clearly figurative language literally. There's plenty to fight about without that.


From OC: > I.e., not caused by the experimenters asking the participants to behave a certain way

I think the point is, OC is presenting the experiment as something that comes about as a result of the setting (including uniform). But, I think from the researchers' and participants' perspectives, the instructions played a significant role.

From article: > For Korpi, the most frightening thing about the experiment was being told that, regardless of his desire to quit, he truly did not have the power to leave.

> Another prisoner, Richard Yacco, recalled being stunned on the experiment’s second day after asking a staff-member how to quit and learning that he couldn’t. A third prisoner, Clay Ramsay, was so dismayed on discovering that he was trapped that he started a hunger strike.

> a taped conversation between Zimbardo and his staff on day three of the simulation: “An interesting thing was that the guys who came in yesterday, the two guys who came in and said they wanted to leave, and I said no,”

So, yeah, the instructions were a huge part of the experiment. People wanted to leave and couldn't. The action/reaction between the participants ("I want to leave"/"No.", complain/6 hours in solitary) probably made the scenario much more real than the hallway posing as a jail or the fake uniforms.

So, figuratively or literally I don't think it was about what happens when you put on a uniform. I think it's more about what happens when someone tells you to follow a set of rules that create an empowered/disempowered dynamic.


When I was in middle school in the late 80s, our social studies teachers coordinated a mock segregation experiment. 1 out of every 8 students was identified as an "Other". We were told that the "Others" were less deserving of respect and that it would be okay to make fun of them. The "Others" would always be last in line, would be required to use "Others"-only restrooms, etc. Teachers were told to favor the regular students and not the "Others." There were no prisoners and guards. The distinction was those with privilege and those without.

The experiment was planned to last three days. It was stopped after less than a day and a half. I remember the principal coming on over the PA and announcing that the experiment was ending immediately and that treating any "Others" any differently from then on would result in a suspension. There were students showing up in the office crying about it. Not just complaining. Literally in tears.

The problem, primarily, was that it was a middle school. There's already a group of students that all the other students mistreat or pick on or don't respect as well. There's already a social pecking order. When some of those low social rank students were assigned to the "Others" group, they really got bullied by people. What I remember being the most shocking was how several teachers were bullying and treating students badly. Yelling at them or punishing them for no reason at all. Just because they had a paper sign that said "Other" on their shirt.

The thing that this drilled into me is how vile people can be to each other when they think the others deserve it or otherwise aren't deserving of basic respect or equality. The older I get, the more I look back on this little experiment and am shocked by what happened. Xenophobia and sectarian divisiveness is a remarkably easy way to dehumanize and strip other people of basic human rights. It's really quite terrifying how easy it was and how quickly it happened. How all the students and staff just readily accepted the new social order because that's what the authority said was true. How people abused that social order for no good reason. How people who were ostensibly pretending were actually acting in horrific ways. It's difficult to know in the moment what's acting and what isn't when it's a stranger doing it.


It’s a consistent phenomenon with people. As soon as someone finds out that it’s socially acceptable to behave badly towards some group, many otherwise nice people have no problem being absolutely terrible towards complete strangers.

A relatively recent example is the “it’s OK to punch nazis” meme, but there are many others. (If you haven’t heard about this, it’s the idea that it’s perfectly good and admirable to walk up to people and physically assault them merely because of their stated horrific political opinions.)

It’s difficult to even talk about this, because the backlash is so strong. If someone is against punching nazis (for instance), are they defending nazis? Are they also then, by extension, a nazi, and deserving to be punched? As soon as there is an actual “Other” group which is unprotected by social norms, any discussion is almost pointless.


"It's perfectly acceptable to beat up gays" is probably a better example of this troubling phenomenon than "It's perfectly acceptable to beat up people who have publicly declared their membership in a murderous conspiracy"


> when they think they deserve it or otherwise aren't deserving of basic respect or equality

... or when they are told to do so, told what bad things to do (which you even relate yourself) and given a schedule of bad actions to take, and moreover told that what they are doing was a positive good, in this case the advancement of education of all students, and just a scientific experiment. Missing these parts out is what is being discussed both in the headlined article and in other parts of this very discussion.

The acceptance here is in fact the uncritical acceptance that such experiments are valid, constructive, and not biased by forcing a particular desired outcome in order to support some agenda, or by ignoring an uncontrolled external factor that skews the results. Whereas one should have questioned what your social studies teacher was trying to demonstrate, and whether the experiment was correctly fashioned for demonstrating it in the first place.

It is time to re-visit what was drilled into you in middle school, more critically.


You're dancing around your point. Just state it.


Wow. Middle school is already a zoo; to deliberately open the cage doors and let the animals out was a crazy idea.



while the submissiveness angle is interesting, the most intriguing/important conclusion of the article is how impelled we are to see ourselves as the "good guys/girls", and how that's rationalized. time and again, the author comes back to this specific rationalization, including in reference to his bank-robbing cousin and zimbardo himself.

when you're trying to understand someone else's "bad" actions, it's important to realize that they don't see themselves as a bad person, that their rational brains will wrap memories around an inherent belief in their own "goodness". that comes from a regular failing in how we narrate our lives into simple categories of "good" and "bad".

people make mistakes constantly, and are constantly making up for them. social life is an endless chain of conflict resolution with ourselves as the protagonist.


The scary thing is that even torturers convince themselves of their "goodness". They will classify their violations as "necessary" or "warranted by the situation" and go further to assert that they have "standards". They say they don't do the things these other, "real" torturers are known for.


I think the significance of "just acting" is the knowledge that the suffering inflicted isn't real. That obviously has ramifications for the experiment: what people are willing to act out in a theatre isn't very indicative of what they might be capable of in a non-theatre setting.


I think that this needs to be understood in the context of the "replicability crisis" which is particularly critical in the field of psychology. The real question is what other bogus science is being taught to psychology students today.

Psychology has always played in multiple ponds. Experiment-based social science. Explicitly unscientific^ theory (e.g. Freud). Quite a lot in the practice/medical aspects. A lot of philosophical approaches.

Overall, I think this has added up to a result where, when people in the field say "we know X to be true," it's hard to know what they mean, never mind whether they're correct. Do they mean it in the sense that a literary critic means something, or the way a chemist means something?

For example: Atlas Shrugged and Animal Farm are both works of fiction and of political "science". This is fine within the frame of what political science is: a non-scientific field. No one since Marx has really claimed otherwise.

When Marx claimed his theories to be scientific, the discussion that followed actually produced very important pieces of modern epistemology: what science is. The critics of Marx were also critics of Freud, and the criticism was identical.

Psychology, though, has remained in a sort of no man's land. I know that I, at least, have really lost confidence in psychology as an academic field. Practice/therapy is a completely different story; I think there's been a lot of advancement in therapy. I can't help but wonder, though, how much it is hindered by bogus "science."

^in the Karl Popper sense


The true experiment is how people come to willingly believe the Stanford Prison Experiment.


Um, what was the lie again?


Zimbardo was an evil man who tried to justify his behaviour by setting up a sham experiment to "prove" that everyone can be evil, given the right circumstances.


We can't escape that behavior because we are strongly encouraged to care what others think about us.

Western culture is all about ego, and ego loves to feel part of some team or group. Standing alone is very hard for most people.


Did you read the article? It's about how the SPE was a basically one big lie.


Not the first time for psychology. I remember my professor dismissing Freud and company, even though we learnt about them in our textbooks.

Not saying that Freud is a fraud (or even Jung or Maslow), but their theories are hard to replicate and have been gradually phased out as a real explanation of events. Furthermore, their psychiatric/psychotherapy diagnoses are not effective for diagnosing actual psychological illnesses; their treatment techniques, on the other hand, have been invaluable (https://www.mentalhelp.net/articles/mental-health-and-the-le...). Psychology is definitely going through a replication crisis. (https://en.wikipedia.org/wiki/Replication_crisis#Psychology_...)


What Freud, along with a few of his contemporaries, got right is that most of our thinking is unconscious. If he'd stopped there and investigated, we would have a different picture of him today. But no, he developed theories on top of that theory, then shifted his stories to fit his wild speculation. Even as a historical source he's hard to take seriously.

The only recourse to avoid calling him a fraud is to call him a fanatic.


Yeah, it was quite revolutionary then, simply because psychology wasn't even a thing. It's kind of like looking back at people worshipping fire, whereas nowadays we understand way more about the whole process of fire.


Freudian "theory" was not science, it was a catchy narrative.


When I was assigned "Civilization and its Discontents" in college, I already knew Freud's theories were poorly supported, but it was still pretty convincing while I was reading it.


That says something about the quality of the writing, but not about the quality of the ideas.


Exactly.


Can't really blame him much then; likely only the truly psychotic would have been depicted as mentally ill back then, simply because people didn't know what psychology was, which definitely skewed his samples all over the place.

Then again, I'm probably just stating another unproven assumption.


The NSA employees are wrong to spy on us! They should be held accountable!

Brb though, my manager wants me to put clicktale on our website. My burndown chart is gonna look so good this week


But the study's already been repeated all over the world in prisons, so it's reproducible.


A confounding factor with real prisons is that (hopefully) most of the prisoners have antisocial tendencies which led to their incarceration and the prisoners and guards are under credible threat of violence from both parties.

In other words, are the guards behaving like jerks because their power has corrupted them, or because behaving nicely is likelier to lead to them getting attacked by prisoners?

Another consideration is that while I'm sure abuse in prisons is rampant, hopefully it's still far from the majority of guards behaving poorly. If that's true, then real-life prisons might be evidence against this conclusion, as (hopefully) most prison guards wouldn't be jerks.


> A confounding factor with real prisons is that (hopefully) most of the prisoners have antisocial tendencies which led to their incarceration and guards are under credible threat of violence from both parties.

I suppose we could look at jails, then. They'd mostly be full of people still presumed innocent, and most of those who are guilty aren't guilty of a violent offense. If you just wait a while, you can find out which of those presumed innocent people were found not guilty. It'd be interesting to see how the guards treated those people.


The SPE was full of actually innocent people - you had to make a mental leap to imagine their guilt. People in jail are there because someone assumes they've done something wrong - it's (unfortunately) a mental leap to assume they are innocent, even though the law says that they are.


And what about Abu Ghraib?


What about it? The perpetrators at Abu Ghraib believed the tortured Iraqis deserving of it. In 2012 Lynndie England said, "They weren't innocent. They're trying to kill us, and you want me to apologize to them? It's like saying sorry to the enemy."

Abu Ghraib doesn't boil down to uniform and a prison setting. The perpetrators believed themselves to be in the right and Iraqis to be deserving of punishment before they were ever placed in the prison setting. It doesn't map onto the SPE at all, despite countless commentators drawing comparisons between the two when the Abu Ghraib story broke.


Why can't we? Simple: psychologists refuse to recreate it. They refuse to prove that anything would ever be different. I really find the example of the dude 'faking' and trying to get out of the study an insanely poor choice for trying to undermine its credibility. Prisoners, brace yourself for this shocker, don't want to be in prison. That makes it a more legitimate test, not less. If you could make sure every single person participating was missing out on some critical part of their life, and would go to great lengths to try to escape, that would be an effective simulation of what prison would be like if filled not with criminals but with average people under the same pressures.


The SPE is a great experiment. I am certainly an outcast among most psychologists in believing that the SPE should be allowed. But that's my belief. The darkest part of human nature can only be revealed by an experiment like the SPE. If you watch the documentary on the SPE, you will see the changes. I am fascinated with the minds of humans in the most direct way possible.


Did you read the article, in full? Carefully, and with a scientific mind? There are several points where the author refers to selective editing, critical facts left out, of documentaries and news reports, in order to support the false narrative. If you watch the documentary, you will see what its authors want you to see; that's not necessarily the same as the truth.


I'm under the impression that this kind of modern experimentation (like Milgram's) is considered unethical, making future interesting scientific trials impossible. Once we finish "debunking" the poorly orchestrated experiments of the past, we will finally be left with the zero-knowledge apex of deconstruction that our age so highly prizes.


Rather than being "debunked", the Milgram experiment has been pretty widely replicated.



