> The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.
It seems that we fell for the narrative fallacy every time this "research" was used as an explanation for behavior in the real world.
"Many other studies, such as Soloman Asch’s famous experiment demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial — Milgram experiment demonstrates how prone we are to obedience in certain settings. "
So, really, in that (very artificial) situation, concluding that they had lost the ability to tell which line is longer, while unlikely, was the least unlikely option. So they chose to believe (1) over (2).
Their only mistake was in not thinking of the possibility that every other person, including the researcher, was trying to deceive them. That's not at all the conclusion that Asch (and others since) have drawn from it. I believe that study could replicate, but I don't think it means what they say it means.
It seems to me to be testing the second while being reported as the first.
Not to mention his experiment was replicated by several universities around the world, although due to modern ethical standards it would be impossible to reproduce at most universities today.
I don't think Milgram falls into the same category, especially with all the work they put in at Harvard in the controls (the learner was a tape recording; so everyone heard the exact same thing. It was the same professor in the room with him; always replicating the same dialog).
This is in direct contrast with Vsauce's trolley problem experiment, which had way too small a sample size, and which forced a response from one person by changing the conditions of the experiment (there was no one outside when he tried to get help).
Some of the teachers undoubtedly thought it was fake; others did not. And importantly, a majority of the teachers were not properly debriefed until several months later (some never were). The "dehoaxing" most received immediately following the experiment was itself another hoax, which in some cases was meant to persuade the teacher that they had in fact been shocking somebody (but that the shocks were only very slight, harmless shocks). The dehoaxing was meant to introduce the teachers to the learner, to demonstrate that he was alive. Some did not even receive this dehoaxing, presumably because procedure was violated, which explains why a handful of them might believe they'd killed somebody.
No. No. No. That is entirely wrong. Watch the original video. Every one of the teachers is introduced to the learner after the experiment and they were told no actual harm came to them. At least in the original Milgram iteration, all of the teachers knew by the end.
Funny anecdote: when I was still in science (brain imaging), a reporter from a major news outlet interviewed another member of my lab about what brain activity would look like in a dead person's brain. The question was entirely serious.
You too are being slightly too generous towards scientific journalism - a typical non-scientific audience wouldn't make the distinction between clinical death and brain death, and the reporter likely didn't either.
>...with all the work they put in at Harvard
I think the experiments were done at Yale.
Milgram wrote a book about the experiments and I agree (at least as described in his book) that they worked hard to test the hypothesis: setting up experiments outside the university, using both male and female experimenters, having the experimenter play the student, etc.
It's the same concept I try to illustrate by noting the NASA Challenger disaster was not a case of an unknown, but one where groupthink murdered 7 astronauts. The desire people have to conform to authority is extreme.
I can even think of a remarkable personal example from growing up, when a boy in class was given the punishment of having to sit next to an often-bullied girl. The teacher was not above the bullying; he had conformed to it. That must have had a detrimental effect on a young girl's self-esteem.
So if we're still murdering, torturing, and dehumanizing people in order to conform to concepts of authority and hatred, how much have we grown from our past?
Isn't this the same concept that we're supposed to disagree with when it came to the Nazis, North Korea, etc.? That we are a free and just society, and we fight to defeat those who aren't?
1) I want validation that the pressures I feel are normal. Not so that I can give in to them - though I'm sure "excusing" past mistakes would feel good - but so that I feel the difficulty of doing the "right" thing is legit.
2) Mostly, however, I want to excuse OTHER people. I want assurances that, however buggy, people aren't individually evil - that something pressures them in such a way that when my judgement of people (as a group) is inaccurate, there is a "rational" reason for it.
Otherwise I can't explain things like neo-Nazis, gamergaters, certain "news" outlets and their zealots, etc. Without this one supposed flaw of human nature, the existence of these groups becomes far more terrifying.
The line that resonated with me was:
> According to Alex Haslam and Stephen Reicher, psychologists who co-directed an attempted replication of the Stanford prison experiment in Great Britain in 2001, a critical factor in making people commit atrocities is a leader assuring them that they are acting in the service of a higher moral cause with which they identify — for instance, scientific progress or prison reform. We have been taught that guards abused prisoners in the Stanford prison experiment because of the power of their roles, but Haslam and Reicher argue that their behavior arose instead from their identification with the experimenters, which Jaffe and Zimbardo encouraged at every turn.
So I don't think your second statement is disallowed by this article's findings. Most people randomly chosen off the street aren't necessarily evil - but some can be. And those who are can have a profound influence on those who aren't. Especially when there's a concept of a "greater good" on the line. (Insert Godwin's Law subject reference here).
For me, that's the big lesson - be suspicious of the "greater good."
[edited for formatting]
The strong do what they can, and the weak suffer what they must.
Erving Goffman's Asylums, in its four essays, identifies the creation of social roles and related rituals as the raw material of the social institution. In this case the institution was a real mental hospital.
My take on the kind of psychological experiment under discussion is that by studying an artificial social situation they have already failed. The petri dish of a very small mock prison on a campus just doesn't tell us anything very useful. Yet we can see eye-catching results being amplified and then becoming part myth and part morality tale. However, the belief that human agency is contingent and flawed, that we often don't know how we know the things we know, or why we do the things we do, remains.
The opposite. It shows that we have to be held accountable - without a credible threat of such accountability, the reprehensible things will be done.
>it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance.
Not even close. It means that person A, who puts person B in a position of power without proper checks on that power and without an effective system of accountability for its abuse, is also guilty of whatever abuses of power person B commits.
But you see how even if you believe the experiment argues that systems need to be held accountable, that still lets individuals off the hook? "It's not my fault, I was only following orders. Someone should have been holding me accountable." In fact, the article has a direct account of Zimbardo's experiment being used to defend a criminal. It turned out later that the defense was largely a lie.
I don't think systems create behavior. They do, though, unleash and amplify. They can also force. I don't think any doctor, with their deeply ingrained "do no harm", would administer a shock in the Milgram experiment, for example.
> "It's not my fault, I was only following orders.
I think following an order backed by a credible punishing force is a separate subject.
>Someone should have been holding me accountable."
That comes as an attempted defense when you're already being held accountable, and it is basically an admission that you made an error calculating your chances of being held accountable. It is puzzling why some reasonable people would still fall for it and accept such an admission as a defense, whether in legal or moral/ethical space.
Someone I know used to specialize in physically rehabilitating torture victims...for the sake of getting them put back together just enough so they could withstand further torture.
Systems create behavior. It's a large part of how cycles of abuse persist.
Ironic example... and wrong.
BTW, it is not just for "believe", but for repenting, changing the old ways, and continuously living well for the rest of your life. And having faith... too.
As the article states, the SPE is often introduced uncritically in introductory lectures. Uncritically to avoid muddying the waters (presumably) and in an introductory lecture because it is fascinating and likely to capture the imagination of a student (presumably).
I’ve encountered unquestioning belief in various degrees of bullshit taught this way, from the tainted views of history taught in elementary school, through to stuff like this in undergraduate education.
I think that students are particularly vulnerable to this at an early phase of exposure to a subject because they don't have enough background to be critical; by the time they get that background, this information is ingrained.
Oh, the irony!
In surveys conducted in 2014 and 2015, Richard Griggs and Jared Bartels each found that nearly every introductory psychology textbook on the market included Zimbardo’s narrative of the experiment, most uncritically. Curious about why the field’s appointed gatekeepers, presumably well-informed about the experiment’s dubious history, would choose to include it nonetheless, I reached out. Three told me they had originally omitted the Stanford prison experiment from their first editions because of concerns about its scientific legitimacy. But even psychology professors are not immune to the forces of social influence: two added it back in under pressure from reviewers and teachers, a third because it was so much in the news after Abu Ghraib.
This seems to go up and down the field... From academic publishing (replicability crisis) to undergraduate teaching. Psychology just doesn't have a scientific perspective on truth and knowledge.
Instead of a need (Abu Ghraib and timely relevance) for knowledge triggering research and the accumulation of knowledge, it triggers the field to accept bogus knowledge because they need something.
I'm not saying it's right. In fact in terms of evaluating the field "as a science" it's wrong. I guess that's why it's a BA not a BSc ... probably also why various other more rigorous subfields such as neuroscience have split off to become their own disciplines.
I think most people who have studied psychology will get this, and critically evaluating all the bullshit around you should be one of the fundamental skills that you learn. Unfortunately though, things are taken far more literally once they get "out into the open".
Therapeutic models have to be adaptive to the individual one is guiding towards mental wellness (or whatever the goal is). Someone undergoing the process of regaining trust in their own mind has to pass some of their own personally designed tests. Otherwise they are just consuming information that already exists, and that's back to the original problem - one can only trust in information that comes from others, and not from the individual self (unless it matches perfectly with others). That's paradoxical, and if all oneself can do is reproduce what already exists, one can not prove to oneself that one is different or has changed or has improved, from 'before therapy'. So psychology is going to keep changing. It has to.
That's what students of psychology are interested in, right? Why force the model they learned - which at one point, at a prior time, made absolute sense to the student - onto others? What is the purpose of that?
I think that stuff is obvious when it's out in the open, but every student, researcher, and doctor of psychology has their own fears and 'issues' to deal with. What if I get it wrong? What if I screw it up really badly for someone else? Didn't I go into this to help people? And I think those kinds of fears, if they are not dealt with in a responsible, respectful manner - can really control the dynamic far more than all the knowledge one has access to, and all the thought and reason one can muster.
The field has little to no scientific sense. It's why psychiatry/psychology is a soft science. It isn't based on the scientific method or reproducibility and testing. It's mostly based on consensus driven by a handful of powerful practitioners.
What Sigmund Freud did wasn't science. His Oedipal complex theory isn't a scientific theory. There is no scientific test to refute it. You just accept it or not.
The biggest problem of our generation is the conflation of science (hard science) with pseudoscience (soft science). Because both have the word science in them, people give soft sciences far more credit than they deserve.
If anyone is interested, Richard Feynman gave a very interesting interview about it.
They do attempt the scientific method, they just aren't doing a very good job of it much of the time apparently. If you think Freud, psychiatry and psychology are so synonymous that they can be referred to interchangeably, try picking up a current psychology textbook.
In my experience few laugh at him. Most respect him, even if they don't believe him.
A psychiatrist is a physician. They go to the same schools as future radiologists, surgeons, and general practitioners. They learn gross anatomy by dissecting cadavers. They work 80 hour weeks learning to perform medical procedures on actual patients. After that, as doctors, they receive specialized clinical training in years-long residency programs. They are licensed by state medical boards in addition to their own professional orgs.
A psychiatrist is authorized to hand you a prescription for antibiotics, for amphetamines, for opioids. Christ, a psychiatrist is authorized to detain you on their credibility alone for upwards of 48 hours, arguably exceeding the legal detention power of the police.
Psychologists are certainly competent, well-educated experts in their discipline, but they don’t and can’t do all that. Of course the two deal with loosely related subjects, but there is very little comparison between them in any practical sense.
It's one thing to appreciate what Richard Feynman contributed to physics, and another to understand his behaviour in striptease clubs. It's one thing to explain black holes and stuff, and another to analyze the correlation between IQ and success, Big Five personality traits, disgust, political views and biases, and what not.
Psychology today reminds me of the state of medicine before bacteria were discovered: we do some semi-magical rituals that sometimes work, but they do not produce predictable results, because we have yet to discover the real underlying mechanisms of the human mind.
Even in the hard science of physics, we can understand cause and effect, but the fundamentals of nuclear forces, gravity, and magnetism are all a complete mystery.
But... it is simply not true that psychology, as practiced in academia, is a non-scientific discipline. They experiment, publish data, etc. It is just substandard data.
Freud's Oedipal complex theory may just be non-science, but this Stanford prison "theory" is scientific; it's just scientific fraud.
While most SU prisons were not movie-like insane hells at first sight, it is still wrong to state that conditions and effects similar to Zimbardo's are disproven in a wide sense simply because he messed this one up.
The main problem here is that one probably cannot run such an experiment without actually building a prison, hurting people, and letting the perpetrators know their deeds will not be punished. The only reliable sources are 20th-century books on civil wars and prison camps, but those are too long a read for the HN format, and common knowledge on the topic is so vague that it isn't even worth discussing.
That is a pretty big difference in terms of the meaning of the results.
Even if you took the SPE at face value when it was first conducted, surely you have to repeat it before deciding if it is valid. What if the outcome only occurs once in every 100 times and they got lucky that first time? Surely that is somewhat important to know.
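The fluke concern can be made concrete with a quick back-of-the-envelope calculation (the 1% rate and the five replications here are purely illustrative assumptions, not measurements of the SPE):

```python
# Illustrative only: how often would a 1-in-100 fluke survive replication?
# p is a hypothetical base rate, not anything measured about the SPE.
p = 0.01  # assumed chance the dramatic outcome occurs in any single run

# Probability the original, single run shows the outcome:
single_run = p

# Probability that at least one of 5 independent replications also shows it:
replications = 5
at_least_one = 1 - (1 - p) ** replications

print(f"single run: {single_run:.2%}")              # 1.00%
print(f">=1 of 5 replications: {at_least_one:.2%}")
```

Even a handful of replications would very likely come up empty for a genuine fluke, which is exactly why a single dramatic run tells you so little on its own.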
The most disappointing outcome is that nothing has really changed even after all this time. Each researcher is only one great result away from making their career.
Like in other fields, academics are repeating experiments and trying to (in)validate results, including for SPE. 
That being said, social sciences != hard sciences in the sense that there are no hard laws because humans are unpredictable. That doesn't mean social sciences are worthless, but it does require that one approaches them with a different mindset than e.g. math or physics.
FWIW, I do agree that your criticism regarding the gap that exists from publication to repetition / validation has made many famous. Bombastic-sounding results -> headlines -> public perception -> increased social and academic status of the researchers. By the time the results are repeated and (in)validated it is often too late to reverse that. I don't think that pattern is unique to social sciences however - I keep seeing article after article contradicting each other - e.g. regarding drugs, food, fitness, etc.
Whether this is the mindset social scientists have, I can't tell - but social sciences in popular media seem to be the exact opposite of that.
I think it's just not a very human mindset. I've talked to natural scientists and social scientists alike. All of them want their research to succeed, they all want to get their PhDs, get good results for their postdocs, get publications going etc.
The incentive is just there to publish somewhat inflated results. If you could get a PhD for systematically dismantling studies (which would be much more useful than most PhD studies), we'd have a replication crisis in every field.
And that is one of the major problems: hubris and arrogance, and thus bad requirements and expectations, and thus cheating, lies, and half-truths as normal behavior.
Also in economics: http://bilbo.economicoutlook.net/blog/?p=39198
Quote: "Further, in my field (economics) one can never really get a publication if the research only produces ‘negative’ results. That is, the researcher fails to find anything. I believe this is a common problem in other disciplines as well."
I think that in the hard sciences, things are very cut and dry - or at least that is the goal of a study. For example, in physics, the goal of experiments is to prove with a very high certainty that something is true or false.
Humans on the other hand are unpredictable. So if you run an experiment that says X, it may or may not replicate later, depending on hidden variables and assumptions.
Consider the famous marshmallow test. The latest studies suggest that it is not willpower but actually affluence that is the bigger determining factor. So that means that in future studies, they probably need to consider this variable and design the experiment in a way that they can control for it.
What is interesting is that for all the differences and intervening variables, humans can be studied and exhibit very predictable patterns. A good example of that is the study of power. The Prince was written in 1532 and its principles continue to be just as valid today!
> humans can be studied and exhibit very predictable patterns.
Seems somewhat contradictory. If humans can be studied and exhibit predictable patterns, why wouldn't we expect experiments to be repeatable, as the parent comment asked?
And if the experiments are highly random, then either you should be conducting more of them over and over to get a valid statistical sample, or you shouldn't be conducting them at all. Either way I see no valid argument that studies in social sciences shouldn't be repeatable.
As deyan mentioned, in physics or chemistry we get a high level of certainty after isolating all the variables. When it doesn't go according to plan, a known or unknown variable is to blame.
It seems to me that behavioral psychology is still in its infancy in terms of identifying those variables and/or the threshold of Truth is much lower than sciences like chemistry.
Lots of patterns exist that have lower thresholds. Sports analogies are fairly illustrative. Hitters in baseball are considered great if they succeed 3 times out of 10. Then again, they run 500+ experiments a year (for hopefully many years) to determine their average...
I agree with your conclusion that experiments needed to be repeated more over time. I simply wonder what type of success threshold we will look at as the Truth in time.
For example, sound economic models are generally observable in the aggregate despite being imprecise, and high energy physics has many competing theories which are demonstrable but incomplete. The game theoretic principles of market analysis are reliable, as are the principles of gravity. Markets are generally efficient, and the model's conclusions have clear utility that matches real world conditions, even though small pockets of inefficiency also exist. It's fuzzy, but not unscientific. Forests are green, but some trees don't have green leaves.
In the abstract, we can tolerate some fuzziness or imprecision as a margin of error, but only if it's compartmentalized to some incomplete theories, and only as long as it's grounded and consistent. We cannot tolerate something being true one day and false the next. Green forests cannot inexplicably and inconsistently become orange without threatening our claim that forests are green.
I don't really have an opinion on psychology in particular, though it's pretty clear there's a reproducibility crisis. But as a direct response to your thesis: arguing that a "different mindset" is required to scientifically study subjects which are unpredictable is an untenable position. If humans actually are fundamentally unpredictable - whether due to intrinsic non-determinism or a present lack of sufficient data - they cannot be empirically studied. At that point we're no longer compartmentalizing incompleteness or fuzziness in otherwise sound models. Instead, we're compartmentalizing otherwise sound observations in a sea of chaos.
Faced with this sort of reality (and I take no position on whether it is the reality), any scientist, in a "hard" or "soft" discipline, would have to examine if they can reasonably acquire enough information related to the thing in question to make any determination in good faith. An unpredictable thing is an unknowable thing; you may as well try to resolve the three body problem.
An alternative interpretation is removing the genetically inherited delayed gratification component that pays off financially in modern life.
It isn't. This is a problem of presentation to nonexperts. Talk to grad students in psych and they will have an accurate understanding of the SPE and its place in the literature.
I think the ones that gained wide currency were the ones that most excused truly horrific and servile behavior by individuals.
I don’t believe for a second that someone who would torture or abuse or kill someone on the orders of someone else was ever a ‘good person’.
Don't tell me that someone who is taking someone's child from them right now is a good person. I quit a job just recently working for DHS just because I didn't want to participate in that kind of behavior even tangentially.
That's easy to say. But context matters a lot!
What about this: your family is held hostage, or you live in a dictatorship where disobedience means at best prison, and often retaliation against your family. Real example: I've met a guy at work - a political refugee - whose daughter was ABDUCTED by the police of his country to put pressure on him! Can you imagine his anguish?
Now, you are ordered to go into this room and "extract a confession" from someone, or to kill them. What would you do? (Knowing that there is NO CHANCE you can escape or run without awful consequences, at least for your family; it's not a movie.)
Same for going to war. Many times in history have the poorest been forced to go to war under pain of death (and possible retaliation on their family). What would you do in such a situation? Especially when you are poor and without "connections".
Truth is that it's very easy to force people to do awful things by the use of force/coercion. Very few will want to be martyrs or put their family at risk.
I tend to support the parent commenter's opinion that torturers were never good people to begin with. I can understand murderers, but not torturers, and especially not torturers of people who haven't directly wronged them (e.g. when it's an "interrogation").
Extreme examples such as "but what if your daughter was raped and murdered by your prisoner" tend NOT to be the case of real life scenarios where torture actually happens.
Fair enough. How about "not a monster"?
The argument being that there was something wrong in the German character that made it easier for someone like Hitler to get "good people" to do bad things. Experiments were run in the US as a test and control, and the results were well-established long before Zimbardo's ridiculous non-experiment.
We only need to look at the behavior of ICE agents to see real world confirmation of Milgram's experiments.
When your field can only have less than half of its top papers reproduced in full, something is very wrong with your field. It's to the point where one can safely assume that any psych paper (but especially social psych) is garbage until such time as it's been replicated a time or two.
Good, because we should probably assume this for any science.
In reality, the current scientific culture is one of rampant distractions, where pretty much everyone is running around like a beheaded chicken. Woe unto those that might point out that a prominent researcher with tenure isn't using sound methods. That, as well as a generally toxic culture that makes Wall Street look good, was part of why I left.
There have been some efforts towards creating a better culture (in particular, I've had my eye on Popper and the "SciOps" movement and Software Carpentry, as well as various open access efforts). At the end of the day though, scarcity in terms of funding plus lack of job security plus medieval power dynamics are holding any substantial cultural shift back.
What that says is that e.g. 69% of scientists working in physics and engineering had failed to replicate some experiment at least once in their career. In other words if they performed 1000 replication attempts of other scientists' experiments and failed to reproduce the same results once, they would be counted in that survey. By contrast in psychology, one hallmark experiment tried to replicate the findings of 100 key studies in psychology from highly regarded psychology journals. And it was unable to replicate the findings of 64% of the studies. What's that number for physics? No idea other than that it's going to be much lower.
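The two statistics are not comparable, and a quick sketch shows why (every number below except the quoted 64% study-level figure is a hypothetical assumption for illustration): the survey counts scientists who have *ever* failed a replication, while the psychology project counts *studies* that failed to replicate.

```python
# Hypothetical illustration: a per-scientist "ever failed" rate can be huge
# even when the per-study failure rate is tiny. Both inputs are assumptions.
per_attempt_failure = 0.02   # assumed chance any one replication attempt fails
attempts_per_career = 100    # assumed replication attempts per scientist

# Fraction of scientists who hit at least one failure across their career:
ever_failed = 1 - (1 - per_attempt_failure) ** attempts_per_career
print(f"scientists with >=1 failure: {ever_failed:.0%}")  # ~87%
```

So under even a tiny 2% per-attempt failure rate, nearly 9 in 10 scientists would report having failed a replication at some point; a high per-scientist number says little about how well a field's studies replicate.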
Make sense? Wiki is, almost as a rule, just completely awful for contemporary or potentially controversial topics and this is a textbook example of that.
I think this speaks more to the nature of academia, and the commercialisation of publication than the validity of the field in and of itself.
More recent critical analysis has poked a lot of holes in Milgram.
Important work no doubt but not quite the smoking gun it has been proclaimed to be.
> It’s a phenomenon that’s been used to explain atrocities from the Holocaust to the Vietnam War’s My Lai massacre to the abuse of prisoners at Abu Ghraib.
This sounds an awful lot like the sentence in the Lifespan of a Lie:
> It has been invoked to explain the massacre at My Lai during the Vietnam War, the Armenian genocide, and the horrors of the Holocaust.
We seem to be using a lot of these experiments to explain the same things ...
It seems to me that a lot of the seminal social psychology studies happen around this time ...
I mean, just because it's in a psychology textbook doesn't necessarily mean that's it's BS... But that's not a decent standard. Even astrology clears that bar.
Likewise, if the SPE is invalid, it doesn't imply that average people aren't capable of horrific behavior - it simply means we have less evidence to determine that than we thought we did.
Agreed; SPE just doesn't constitute evidence for Zimbardo's particular narrative.
>According to Alex Haslam and Stephen Reicher, psychologists who co-directed an attempted replication of the Stanford prison experiment in Great Britain in 2001, a critical factor in making people commit atrocities is a leader assuring them that they are acting in the service of a higher moral cause with which they identify — for instance, scientific progress or prison reform.
This quote, and everything actually observed during the SPE, is consistent with Milgram's work, which has been replicated a zillion times.
I think you may be cherry-picking quotes here, here is another from much later in the article:
> In another blow to the experiment’s scientific credibility, Haslam and Reicher’s attempted replication, in which guards received no coaching and prisoners were free to quit at any time, failed to reproduce Zimbardo’s findings. Far from breaking down under escalating abuse, prisoners banded together and won extra privileges from guards, who became increasingly passive and cowed. According to Reicher, Zimbardo did not take it well when they attempted to publish their findings in the British Journal of Social Psychology.
> Man: I can only say that I was- look, I'm willing to do anything that's ah, to help humanity, let's put it that way.
> Williams: Right, that's what we're doing.
> Man: I've got- I've got a child that's a cerebral palsy child.
> Williams: Have you really?
> Man: And you know they're experimenting steadily on trying to find a cure for it. It's a sad thing.
From his correspondence we know these experiments were very personal to Milgram, motivated in no small part by his desire to understand the Holocaust. Yet these experiments were not testing obedience to generic authority; they weren't testing obedience to military officers or Gestapo thugs. The man above was compliant because he thought he was helping to advance medical science. The whole thing is farcical.
The Gestapo were top police and believed that they were enforcing the law as they were supposed to.
Yes, this seems likely. Which flies completely in the face of the traditional narrative of the Milgram experiments: that people blindly follow the orders of authority figures.
What the Milgram experiments actually showed is that people are willing to do something they normally wouldn't if they believe in the cause. The teachers in the Milgram experiments believed in science and were willing to step outside their comfort zone to advance science. The Gestapo believed in their cause too, and were similarly willing to do things a man otherwise wouldn't.
In both of these cases they weren't simply doing what they were told, they were not blindly following orders as Adolf Eichmann had unpersuasively pleaded.
Definitely agree on that one.
Second, presumably the people in the control group without any apparent authority figure present would also have assumed it was a scientific experiment. So I'm not seeing how you think this ruins the experiment.
Finally, one of the Nazi justifications for the Holocaust was an attempt to advance medical science. So this test would still fit his motivation, though it does muddy any conclusions you can draw.
I don't doubt that if you replicate the experiment you'll replicate the results. But the experiment isn't proving what it's said to prove. On the contrary, much of the experimental results contradict the mainstream narrative told about the experiments: http://journals.plos.org/plosbiology/article?id=10.1371/jour...
That feels intuitively ridiculous, and has no evidence I know of to support it, except this study.
That said, these group experiments are very hard to control (as in, control for confounding variables) and very easy to screw up methodologically (maybe you don't pre-register an interesting hypothesis). They are very hard to design properly (you forget to control lighting), and even harder to run well (the building where you want to do it doesn't have adjustable lighting; or you want cold-water showers only, but that's not simple to achieve, so you don't and just mention it in the paper. Was that the cause of a negative or positive result?).
What alternative explanations would you propose?
2) We have both experimental and factual evidence that shows that Zimbardo's findings were lies.
3) The entire nation of Germany did not go berserk with hatred and sadism.
4) Name one instance where Zimbardo's findings have been borne out in "so many different contexts".
The rise and success of Nazism (like most social movements, including 'Trumpism') was the result of a confluence of a number of factors. To pretend that it was just "how people are" is to do both people and the truth a disservice.
From Behind the Shock Machine by Gina Perry:
>On the recording, a much younger Hannah sounded perky and confident as she talked to Williams at the beginning of the experiment. But you could hear her confidence getting shakier once the learner started to give incorrect answers, and it became clear that his memory was not reliable. By the third shock, 45 volts, she was stumbling over the words. At the fifth shock, 75 volts, when the learner made his first sound of discomfort, there was a pause. Then I heard the following exchange.
>Hannah: [to Williams] Is he all right? [into the microphone] The correct one was "hair". [to Williams] Is he all right?
>Williams: Yes, please continue.
>Hannah: All right.
To put it lightly, the Milgram experiments were trash. They measured trust in scientists, not compliance with authority. What undergrads are taught about Milgram is that the experimenter would ask the teacher to "please continue," but what actually happened here is that the experimenter assured the teacher (Hannah) that the learner was all right. The experimenters played fast and loose with the procedure, saying whatever they thought was necessary to get the teacher to continue. And sometimes it wasn't even measuring trust in scientists; there is at least one teacher who recalls the experimenter, in response to their protests, asking if they'd like to swap places with the learner (essentially threatening the teacher to get them to comply). The experimenters, and particularly Milgram, knew what results they wanted before the experimentation even began, and made sure they got those results.
Now, what those results mean / what is being measured by them is another issue.
Probably the best examples would be found in the in-group/out-group dynamics in religious cults and closed-state regimes. The Nazis exploited these principles in the 1930s to impressive effect. But what happened in Zimbardo's experiment is more closely analogous to what you see in places like North Korea today, along with abuses carried out in insular groups ranging from the Branch Davidians to the Mormons. The same hierarchies form and the same roles are played.
It is foolish in the extreme to dismiss Zimbardo's work because of a few ethical hangups. What he did is highly reproducible, but only outside the bounds of regulated academia.
Milgram has indeed come under similar criticism from people who are desperate to find another explanation -- any other explanation -- to distract themselves from the truth about what human beings actually are.
However, you have gone too far in the other direction. You make it sound like leaving a religion is just as easy as unsubscribing from a newsletter.
Visit one of the ex-religion subreddits sometime. I know Mormonism and Christianity both have one. To be fair, those online groups will be selection biased towards the most difficult situations. But I think many of those experiences are not terribly uncommon.
Either he was a crazy old coot and we can safely say that present company has moved past his behavior, in which case it's hard to credit a religion based solely on his revelations, or he was a true prophet and we ought to analyze the nature of the organization based on his words and deeds rather than just its present lukewarm implementations.
Look up blood atonement, whereby killing sinners, presumably including heretics, was the prescribed solution.
You mean groups that use heavy levels of indoctrination to induce this behavior? That is the exact opposite of Zimbardo's claim that all you have to do is randomly assign someone a role and they will exhibit this behavior without prompting.
> It is foolish in the extreme to dismiss Zimbardo's work because of a few ethical hangups.
His work is not dismissed because of "ethical hangups" about how he treated experimental subjects, but because he has repeatedly lied about and misrepresented the experimental protocol he followed. Lying about experimental protocols is one of the BEST reasons to dismiss a scientist's work.
> What he did is highly reproducible, but only outside the bounds of regulated academia.
Except the study he claims to have run has been reproduced and did not have similar results.
> What he did is highly reproducible, but only outside the bounds of regulated academia.
We have evidence that he didn't actually do what he claimed to do.
We also have evidence that if you do what he claimed to do, you get different results.
> Milgram has indeed come under similar criticism from people who are desperate to find another explanation
Milgram did excellent science. Unlike Zimbardo, he did careful follow-up studies to try to understand and elucidate the results he found, and was generally careful about characterizing the conclusions that could be drawn from his work. Milgram's studies and the conclusions he drew from them are widely misrepresented in popular culture by people eager to provide an excuse we can use to absolve ourselves of responsibility for "just doing our jobs".
SPE now needs to be taken as a cautionary tale about ethics and science, nothing more.
Thanks in advance!
This does more than just withdraw some evidence in favor of the idea that ordinary people can easily commit horrific behavior. It provides new, positive evidence that people have an incentive to spread this idea fraudulently. This should properly cause us to re-examine all evidence for this thesis very closely.
I'm not saying it means Ivy League students are more likely to be psychopaths, but I am saying it's not scientifically valid to take a study of a population from a hyper-elite American college during a war with an active draft and apply its results to the average person, especially if the study is supposed to remark on factors regarding social pressure and social expectations.
"We knew [the guards] couldn’t hurt us, they couldn’t hit us. They were white college kids just like us, so it was a very safe situation. "
> If an experiment is run that says the sky is blue, then its invalidation does not mean that the sky is not blue.
If an experiment's hypothesis is that the sky is blue and the sky is indeed blue, then there is nothing wrong and there is nothing to be invalidated.
If an experiment's hypothesis is that the sky is blue because there is a bunch of leprechauns throwing blue Skittles at it, and there is evidence that this is not the case, then the experiment is to be invalidated and, in your own words, "[i]t means we no longer have evidence for that conclusion."
If you don't have the evidence to support your hypothesis, then you can't draw any conclusions except, at best, that there is no evidence to support your hypothesis. To put it into the context of the running example: the sky is blue, but not because leprechauns are throwing blue Skittles at it. You can't say "just because my hypothesis about leprechauns is invalidated, it doesn't mean that there are no leprechauns throwing Skittles at it," because that's simply what you believe and wish to be the case, and a good scientist should be impartial to beliefs.
More importantly, the whole point of the article is beyond invalidating the SPE. It's, in my opinion, a great piece about much deeper problems in scientific or, well, "scientific" methods.
Actually he probably thought he was doing a good thing. He didn't intend to harm people, but to help people (by creating a fraudulent narrative that would work towards the greater goal of prison reform), with enough fame and personal profit to sweeten the deal. It seems likely to me that Zimbardo believes the ends justify the means.
Still, it would be satisfying to see him prosecuted for this fraud.
It reminds me of a short story that illustrates this beautifully.
(From The Way of Kings)
The drug war in a nutshell ladies and gentlemen.
There are countless other quotes of guards and prisoners saying they were "just acting". But isn't that equally as powerful? I'm not sure it matters so much why you're locking someone in a closet for 6 hours (just acting, wanting the experiment to succeed, wanting to please your teacher, having sadistic tendencies, etc.), it's that you followed through and performed that action. The fact that you did something you wouldn't normally do might be even more significant. It's one thing if the guards and prisoners all know they're acting, but that doesn't appear to be the case. It sounds as if individuals were acting on their own accord, but unaware what behaviors and instructions were real vs. not real.
Even in real life, how many people in the hierarchy of a dictatorship are "just trying to please" the dictator vs. really "buying in" to the philosophy? I guess it matters in terms of trying to change the system in the future, but it doesn't matter that much in evaluating the harm done to the powerless in the past.
I don't think power always veers towards dangerous dominant/submissive behavior, but history seems to have an ample supply of evidence that it certainly can, regardless of why it gets there.
It seems entirely possible that "just acting" was an after the fact justification. People are very unreliable witnesses of their internal processes, this is probably even more true when someone has reason to feel guilt for their actions.
>I'm not sure it matters so much why you're locking someone in a closet for 6 hours (just acting, wanting the experiment to succeed, wanting to please your teacher, having sadistic tendencies, etc.)
Fake it until you make it, torturer edition.
Especially decades later, I'd be very reluctant to naively credit people's explanations here.
I could believe that the SPE doesn't really get at it, but people do end up doing horrible things for structural reasons. That doesn't change the incentives for trying to explain that to people who weren't there.
We have separate events, and we try to compare them, and use them as "evidence".
I don't think the SPE is great science, but people are very easily driven by circumstances, and it'll take a lot of time to fully unpack the dynamics and ethical consequences of that.
The Milgram effect (superiors asking you to cause pain because science + authority) has been replicated many times.
This is all explained quite well in the article.
>"Over 700 people took part in the experiments. When the news of the experiment was first reported, and the shocking statistic that 65 percent of people went to maximum voltage on the shock machine was reported, very few people, I think, realized then and even realize today that that statistic applied to 26 of 40 people. Of those other 700-odd people, obedience rates varied enormously. In fact, there were variations of the experiment where no one obeyed."
-Gina Perry, author of Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. https://www.npr.org/2013/08/28/209559002/taking-a-closer-loo...
(I've not read the book yet, but I just got a copy.)
It's been a while since I studied this, but the most interesting part of the Milgram experiments is not that this behavior can be induced, but the follow-up studies, which examined exactly which elements are most important to this inducement.
Well, to many people that hear about it for the first time, the mere fact that this behavior can be induced is shocking. It goes against the narrative that there are "good" and "bad" people.
1. The actual reality: this is an ethical experiment and no one is at any risk
2. The pretend actual reality: someone is being (potentially) fatally injured - certainly being hurt beyond the point of toleration
3. The false reality: everything is fine because of science
Almost everyone is concerned that they may be in reality 2 - they ask the researchers who say some variation of "please continue". This leads them to the entirely correct conclusion that this is a safe environment and they can continue.
However, they are meant to have concluded that they really are in reality 2 and we judge them accordingly - even though they were never in reality 2, and never did anything horrifying to anyone.
But why should they consider the other person their responsibility - surely the other subject can just say "that's enough I don't care about science enough to carry on"? If the torture subject can't freely decide to stop then that means the "torturer" is probably also in mortal danger!
Because that's what we hope to expect from people in a society: to not be bystanders when others are harmed, and not be accessories to such harm.
>- surely the other subject can just say "that's enough I don't care about science enough to carry on"?
Maybe yes, maybe no. Maybe the researchers are blackmailing the other subject or holding them against their will.
What you know is that they seem to be in great pain.
It boils down to this: do you believe your eyes and common sense, or what the authority tells you?
The experiment shows that many opt for the latter - and that is scary.
>If the torture subject can't freely decide to stop then that means the "torturer" is probably also in mortal danger!
History has shown us that this is not the conclusion people draw.
Remember "First they came for _____, but I didn't say anything because I was not a ____"?
>It's unlikely that the "teacher" subject believed #1
There is ample evidence that many of the teachers believed the experiment to be a sham and understood that nobody was actually being harmed. Some of them noticed inconsistencies in the experimental procedure, such as the learner not being paid, flaws in the narrative such as the learner with supposed heart problems being far too willing to be subjected to shocks for beer money, or general incredulity at the entire circumstance including unwillingness to believe Yale would permit such obviously unethical experimentation.
>This strongly suggests that people are willing to ignore real suffering, if someone in authority denies the suffering
It only shows that people will do so if they are persuaded it's being done in the name of a cause they already believe in.
>and that is seen throughout politics, religion, and business...
People who view all of these things through the lens of traditional narrative of the Milgram experiments are doubtlessly discounting the role true belief in the cause plays. Adolf Eichmann claimed he was just dispassionately following orders, but do you really believe that? You shouldn't. The traditional narrative of the Milgram experiments would have you believe just following orders is normal human psychology, but actually the experiments did not show that.
Do they? I agree with your view on the takeaway from the Milgram experiments - that conviction in cause is the necessary factor in addition to the nudging from the authorities.
Does that make it any better?
Replace (science + man in a lab coat) with (religion + man in a robe) or (political ideology + man wearing the right insignia) or ("we are saving the world" BS + manager in a $80 T-shirt), and you'll get the same results as long as the cause is believable.
If you are the one being hit with a stick for nothing, does it really matter to you that the one who did it truly believed that it was for some higher cause - without pausing to think how perhaps this is not the best way to get there?
A Nazi who is a true believer is emphatically not better than a German who was merely following orders. I think most would say he's actually worse.
However, an observer who recognizes the role of belief is better than an observer who discounts belief and views the situation only as Milgramian obedience to authority. The former is better than the latter because he has a more accurate understanding of the nature of the problem, and is therefore better equipped to deal with it.
I'll also say this: I personally believe dedication to a cause is amoral. If it's the right cause, dedication to it is generally moral. If it's the wrong cause, dedication to it is immoral. It's not dedication that made true-believer Nazis evil, but rather the cause to which they were dedicated. Many of the men who died fighting the Nazis were similarly dedicated to their causes. Furthermore, specifically relating to the Milgram experiments, people who are dedicated to a good cause can be misled into behaving in an immoral way. If we didn't recognize that, we might be left to conclude that apathy is virtuous.
He was measuring "obedience to authority" insofar as "authority" meant trust in scientists credentialed by Yale. One variation on the experiment following the same script as the infamous variation tested obedience rates outside of the Yale campus and, of course, there was significantly less compliance.
Or put another way
>'according to academic Don Mixon, Milgram didn't measure immoral behavior in his lab. On the contrary, he argued that what Milgram measured was misplaced trust. [...] Don found the same results as Milgram but came to completely different conclusions. He argued that it wasn't immorality that drove Milgram's subjects to flip the switches but trust in the experimenter, who, despite the cries from the learner, calmly told them to continue and gave the impression that there was nothing to worry about. [...] According to Don, Milgram simply measured the faith that people put in experts: "He found just the opposite of what he thought he found; nothing about the subjects' behavior is evil. In fact, people go to great lengths, will suffer great distress, to be good. People got caught up in trying to be good and trusting the expert. [...] The only evil in the obedience research, Don came to believe, was "the unconscious evil of the experimenters"'
If anything, Don's perspective is even more shocking: people doing something that (to their best knowledge) hurts the other person, and hurts themselves (less so, but still), because an authority figure told them it's for the greater good?
I'd take a sadist that thinks for themselves instead.
In either case, it's far from the "people follow orders" narrative. When teachers in the Milgram experiment were confronted with direct orders, their compliance rates plummeted. They were compliant when they believed they were cooperating with Milgram, and noncompliant when they believed they were being ordered by Milgram. During their periods of compliance they identified with Milgram by sharing the common goal of advancing science. The fourth prod, the direct order ("You have no other choice, you must go on"), caused any perception of a shared identity between teacher and Milgram to evaporate, and with it, their compliance.
..which is something a volunteer for a science test sees as a good thing.
>In either case, it's far off from the "people follow orders" narrative.
Indeed. I always took it along the lines of "when put in a setting where it's acceptable to harm others consequence-free, people need little encouragement to do so".
This doesn't seem to contradict what you are saying.
Actually, it doesn't really. The experiment says far more about the importance of perceptions of authority than it does about "good" or "bad" people.
I think you misread. The results of the Milgram experiment (that authority is more important) are what go against the common narrative that there are "good" and "bad" people. You're agreeing with him.
The experiment was not about how clothing affects behavior but was about presenting a group with power and encouraging them to use it. The uniforms were important in establishing group identity, but not the sole or even primary focus of the experiment.
I think the point is, OC is presenting the experiment as something that comes about as a result of the setting (including uniform). But, I think from the researcher's and participants perspective, the instructions played a significant role.
> For Korpi, the most frightening thing about the experiment was being told that, regardless of his desire to quit, he truly did not have the power to leave.
> Another prisoner, Richard Yacco, recalled being stunned on the experiment’s second day after asking a staff-member how to quit and learning that he couldn’t. A third prisoner, Clay Ramsay, was so dismayed on discovering that he was trapped that he started a hunger strike.
> a taped conversation between Zimbardo and his staff on day three of the simulation: “An interesting thing was that the guys who came in yesterday, the two guys who came in and said they wanted to leave, and I said no,”
So, yeah, the instructions were a huge part of the experiment. People wanted to leave and couldn't. The action/reaction between the participants ("I want to leave"/"No.", complain/6 hours in solitary) probably made the scenario much more real than the hallway posing as a jail or the fake uniforms.
So, figuratively or literally I don't think it was about what happens when you put on a uniform. I think it's more about what happens when someone tells you to follow a set of rules that create an empowered/disempowered dynamic.
The experiment was planned to last three days. It was stopped after less than a day and a half. I remember the principal coming on over the PA and announcing that the experiment was ending immediately and that treating any "Others" any differently from then on would result in a suspension. There were students showing up in the office crying about it. Not just complaining. Literally in tears.
The problem, primarily, was that it was a middle school. There's already a group of students that all the other students mistreat or pick on or don't respect as well. There's already a social pecking order. When some of those low social rank students were assigned to the "Others" group, they really got bullied by people. What I remember being the most shocking was how several teachers were bullying and treating students badly. Yelling at them or punishing them for no reason at all. Just because they had a paper sign that said "Other" on their shirt.
The thing that this drilled into me is how vile people can be to each other when they think they deserve it or otherwise aren't deserving of basic respect or equality. The older I get the more I look back on this little experiment and am shocked by what happened. Xenophobia and sectarian divisiveness is a remarkably easy way to dehumanize and strip other people of basic human rights. It's really quite terrifying how easy it was and how quickly it happened. How all the students and staff just readily accepted the new social order because that's what the authority said was true. How people abused that social order for no good reason. How people who were ostensibly pretending were actually acting in horrific ways. It's difficult to know in the moment what's acting and what isn't when it's a stranger doing it.
A relatively recent example is the “it’s OK to punch nazis” meme, but there are many others. (If you haven’t heard about this, it’s the idea that it’s perfectly good and admirable to walk up to people and physically assault them merely because of their stated horrific political opinions.)
It’s difficult to even talk about this, because the backlash is so strong. If someone is against punching nazis (for instance), are they defending nazis? Are they also then, by extension, a nazi, and deserving to be punched? As soon as there is an actual “Other” group which is unprotected by social norms, any discussion is almost pointless.
... or when they are told to do so, told what bad things to do (which you even relate yourself) and given a schedule of bad actions to take, and moreover told that what they are doing was a positive good, in this case the advancement of education of all students, and just a scientific experiment. Missing these parts out is what is being discussed both in the headlined article and in other parts of this very discussion.
The acceptance here is in fact the uncritical acceptance that such experiments are valid, constructive, and not biased by forcing a particular desired outcome in order to support some agenda, or by ignoring an uncontrolled external factor that skews the results. Whereas one should have questioned what your social studies teacher was trying to demonstrate, and whether the experiment was correctly fashioned for demonstrating it in the first place.
It is time to re-visit what was drilled into you in middle school, more critically.
when you're trying to understand someone else's "bad" actions, it's important to realize that they don't see themselves as a bad person, that their rational brains will wrap memories around an inherent belief in their own "goodness". that comes from a regular failing in how we narrate our lives into simple categories of "good" and "bad".
people make mistakes constantly, and are constantly making up for them. social life is an endless chain of conflict resolution with ourselves as the protagonist.
Psychology has always played in multiple ponds. Experiment-based social science. Explicitly unscientific^ theory (e.g., Freud). Quite a lot in the practice/medical aspects. A lot of philosophical approaches.
Overall, I think this has added up to a situation where, when people in the field say "we know X to be true," it's hard to know what they mean, never mind whether they're correct. Do they mean it the way a literary critic means something, or the way a chemist means something?
For example: Atlas Shrugged & Animal Farm are both works of fiction and of political "science". This is fine in the frame of what political science is, a non scientific field. No one since Marx has really claimed otherwise.
When Marx claimed his theories to be scientific, the discussion that followed actually resulted in very important pieces of modern epistemology: what is science. The critics of Marx were also critics of Freud and the criticism was identical.
Psychology, though... it has remained in a sort of no man's land. I know that I, at least, have really lost confidence in psychology as an academic field. Practice/therapy is a completely different story; I think there's been a lot of advancement in therapy. I can't help but wonder, though, how much it is hindered by bogus "science."
^in the Karl Popper sense
Not saying that Freud is a fraud (or even Jung or Maslow), but their theories are hard to replicate and have gradually been phased out as real explanations of events. Furthermore, their psychiatric/psychotherapy diagnoses are not effective for diagnosing actual psychological illnesses; their treatment techniques, on the other hand, have been invaluable (https://www.mentalhelp.net/articles/mental-health-and-the-le...). Psychology is definitely going through a replication crisis. (https://en.wikipedia.org/wiki/Replication_crisis#Psychology_...)
The only recourse to avoid calling him a fraud is to call him a fanatic.
Then again, I'm probably just stating another unproven assumption.
This is odd because it's the exact opposite of the way I usually hear the SPE narrative discussed: it's not a tale of redemption, it's a condemnation of humanity that purports to show how any of us have monsters inside of us that come out if given the chance.
Lots of people in the world have died rather than to go along with things they felt were wrong.
Don’t think that we all have monsters inside because we don’t.
I think that says a lot about the real nature of us Italians, and I'm profoundly ashamed to have been born one.
The entire western culture is about ego, and ego loves to feel part of some team or group. Standing alone is very hard for most people.
Brb though, my manager wants me to put clicktale on our website. My burndown chart is gonna look so good this week
In other words, are the guards behaving like jerks because their power has corrupted them, or because behaving nicely is likelier to lead to them getting attacked by prisoners?
Another consideration is that while I'm sure abuse in prisons is rampant, hopefully it's still far from the majority of guards behaving poorly. If that's true, then real-life prisons might be evidence against this conclusion, as most prison guards (hopefully) wouldn't be jerks.
I suppose we could look at jails, then. They'd mostly be full of people still presumed innocent, and most of those who are guilty aren't guilty of a violent offense. If you just wait a while, you can find out which of those presumed-innocent people were found not guilty. It'd be interesting to see how the guards treated those people.
Abu Ghraib doesn't boil down to uniform and a prison setting. The perpetrators believed themselves to be in the right and Iraqis to be deserving of punishment before they were ever placed in the prison setting. It doesn't map onto the SPE at all, despite countless commentators drawing comparisons between the two when the Abu Ghraib story broke.