
I'm just gonna be lazy and link to what I wrote the last time the Rat Park page came up on HN:

https://news.ycombinator.com/item?id=7743089

It comes up a lot, maybe because the result is kind of positive and aligns well with the HN crowd's liberal drug views?

The RP experiment wrt morphine addiction in mice has not been replicated. Also, afaik, Bruce Alexander had a hypothesis about drug addiction, designed an experiment to prove his hypothesis, performed the experiment, measured the results, and found that they confirmed his hypothesis. It's not a good way to do research. The results of the Stanford Prison Experiment and the Milgram experiments should be discredited for the same reason: their results were tainted by their designers.

Extraordinary claims require extraordinary proof. That mice wouldn't become addicted to morphine is most certainly an extraordinary claim.




>the Milgram experiments should be discredited for the same reason

Actually, the original hypothesis Stanley Milgram had was that Germans were somehow predisposed to obedience as a culture or race. His studies in the United States were what's called a "pilot test" (https://en.wikipedia.org/wiki/Pilot_experiment) conducted to verify if everything was working as expected. Once he had some control readings for Americans, Milgram planned to go to Germany and conduct the real experiment.

Milgram, and virtually all of his colleagues, believed that nobody would obey until the end of the study. He polled his colleagues, and the highest number ANYONE gave was 3%.

If you're not familiar with the Milgram experiments, take a minute to read up on them. Roughly 60% completed the experiment, obeying orders until the end.

For what it's worth, I think you're wrong about conducting science, and maybe right about Rat Park, but don't bring the Milgram experiments into it. They are possibly the most valuable result science has ever given us.


http://lareviewofbooks.org/essay/psych-lies-and-audiotape-th...

Gina Perry says she has found clear evidence of cheating in one of Milgram's 23 experiment series. It's of course possible that the other experiments were carried out properly, but I think one bad apple spoils the whole bunch in this case. Even if other experiments arrive at similar results, it doesn't change the fact that the original may be tainted.


> had a hypothesis about drug addiction, designed an experiment to prove his hypothesis, performed the experiment, measured the results

That's the very definition of the Scientific Method, so I don't understand what problem you're pointing out:

"The overall process of the scientific method involves making conjectures ( hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions." -- from http://en.wikipedia.org/wiki/Scientific_method

(That the experiment has not been replicated as you claim could be a bad sign, but that's a separate matter.)


The problem is designing an experiment to confirm the hypothesis rather than to test it.


This is not a problem. You can have any hypothesis and any test. If your test has validity it's a valid way to test the hypothesis. That's it.

The scientific process is a social one, and if you feel that an experiment is constructed unfairly, you can devise another experiment to falsify it.

Saying that an experiment "confirms" a hypothesis is just a rhetorical trick by the parent commenter. Experiments can only ever falsify.


I think what the parent means is that a motivated experiment designer can (even accidentally) create an experiment that has a high false-positive rate, thus providing very little Bayesian evidence given a positive result. Ideally, you'd have the experiment designed by someone who actually wanted to falsify the hypothesis (or at least a neutral party), such that the non-null conclusion, if arrived at, would be really strong Bayesian evidence.
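As a rough sketch of that point (numbers made up purely to illustrate the arithmetic): the same positive result carries very different weight depending on how leaky the experiment is.

    # How much a positive result should move belief depends on the
    # experiment's false-positive rate, not just on getting a "yes".
    def posterior(prior, true_pos_rate, false_pos_rate):
        """P(hypothesis | positive result) via Bayes' rule."""
        p_pos = true_pos_rate * prior + false_pos_rate * (1 - prior)
        return true_pos_rate * prior / p_pos

    prior = 0.5  # assumed starting credence, purely illustrative
    print(posterior(prior, 0.9, 0.05))  # careful design -> ~0.95
    print(posterior(prior, 0.9, 0.80))  # leaky design   -> ~0.53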


This is a subtle but important distinction. It is absolutely possible to do a confirming experiment that gives misleading results. There is a nice explanation in the Wikipedia article on confirmation bias.

http://en.wikipedia.org/wiki/Confirmation_bias

A striking example is the (2,4,6) test. From Wikipedia:

"Wason's research on hypothesis-testing The term "confirmation bias" was coined by English psychologist Peter Wason.[66] For an experiment published in 1960, he challenged participants to identify a rule applying to triples of numbers. At the outset, they were told that (2,4,6) fits the rule. Participants could generate their own triples and the experimenter told them whether or not each triple conformed to the rule.[67][68] While the actual rule was simply "any ascending sequence", the participants had a great deal of difficulty in finding it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last".[67] The participants seemed to test only positive examples—triples that obeyed their hypothesized rule. For example, if they thought the rule was, "Each number is two greater than its predecessor", they would offer a triple that fit this rule, such as (11,13,15) rather than a triple that violates it, such as (11,12,19).[69] Wason accepted falsificationism, according to which a scientific test of a hypothesis is a serious attempt to falsify it. He interpreted his results as showing a preference for confirmation over falsification, hence the term "confirmation bias".[Note 4][70] Wason also used confirmation bias to explain the results of his selection task experiment.[71] In this task, participants are given partial information about a set of objects, and have to specify what further information they would need to tell whether or not a conditional rule ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule."


Yes, their rules might be more specific than the general rule, but that is not a problem. Their rules were a correct subset of the more general rule (if what you are describing is accurate). Now, if they are claiming a broad hypothesis and only providing a set of data that asserts a subset of the hypothesis, that is a problem. They are being misleading one way or another. If the researcher presents a hypothesis and misses out on data (for whatever reason), then somebody else will (ideally) point this out. Nonetheless, arguing that because this misrepresentation can happen we shouldn't trust some particular study is little more than baseless criticism.



I don't see the problem with the design of the RP experiment; it might still falsify the hypothesis.


They tested the hypothesis that heroin is addictive by itself, the one supported by the self-administration experiment, and got pretty unexpected results. As far as the scientific method is concerned, their experiment looks ok to me.

It's interesting to note that the criticism of the Rat Park experiment uses exactly the same reasoning the Rat Park designers used against the self-administration experiment, namely that one of the seemingly innocuous parts of the experimental setup (isolation and genetic variance, respectively) was causing a major bias in the results.


"Test" just means "confirm or deny."


Some experiments are designed to "confirm or deny", while others are designed to "confirm".


I guess that's one way to interpret the phrase "designed to confirm," but it's also a fairly natural way to describe a legitimate test. I might, for example, say that I "designed an interview process to confirm that candidates are qualified," and of course it's clear that the process will either confirm or deny.


Nobody invests time and money in a hypothesis unless they already have an educated guess that it might be true.


The issue is that there's a conflict of interest when the person who proposed the hypothesis is the same person attempting to prove it. This sort of bias is why meta-analysis exists.


Real-life experiments with drug decriminalization, along with other evidence such as rates of addiction varying with life circumstances, provide circumstantial support for the idea that addiction is at least not entirely chemical.

"Liberal drug views" in general seem to have a lot of real world support, while hard-core punitive prohibitionism has been a failure in every way pretty much everywhere it's been tried... that is unless your goal is to imprison a large number of people, perpetuate cycles of poverty and crime, and route lots of money to the prison and police/security industries. In that case they're a big success. At this point I consider prohibitionism to be a crackpot view. I get the sense that with at least some people prohibitionism is supported as an indirect way to persecute minority groups. It was true back when the "war on drugs" was pursued, and I think it's still true today.

There are cases where certain liberal positions do not have real-world support and basically don't work, but this isn't one of them. None of our over-arching political bias frameworks (right, left, etc.) jibe perfectly with reality.


> Also, afaik, Bruce Alexander had a hypothesis about drug addiction, designed an experiment to prove his hypothesis, performed the experiment, measured the results, and found that they confirmed his hypothesis.

Uh, what? This is the definition of science.

> Extraordinary claims require extraordinary proof. That mice wouldn't become addicted to morphine is most certainly an extraordinary claim.

So says you. Also, extraordinary claims require the same proof as any other claim; that oft-cited quote makes no sense. Labeling something 'extraordinary' inherently shows that you have a bias, not that the person questioning the claim does.


An extraordinary claim is a claim with a low prior probability. Observations of various approaches to drug policy, and the associated levels of drug addiction, don't seem consistent with the Rat Park hypothesis being an extraordinary claim.
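To put some illustrative numbers on "low prior probability" (these are made up, just to show the mechanics): the lower the prior, the bigger the likelihood ratio an experiment has to deliver before the claim becomes believable.

    # Posterior odds = prior odds * likelihood ratio (Bayes factor).
    def posterior_prob(prior, bayes_factor):
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * bayes_factor
        return post_odds / (1 + post_odds)

    print(posterior_prob(0.50, 10))    # ordinary claim        -> ~0.91
    print(posterior_prob(0.01, 10))    # "extraordinary" claim -> ~0.09
    print(posterior_prob(0.01, 1000))  # needs much stronger evidence -> ~0.91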


> That mice wouldn't become addicted to morphine is most certainly an extraordinary claim.

That is evidence for the claim that addiction has become unassailable fact, regurgitated without critical thought.


If the rats had preferred the morphine solution, it would have suggested the hypothesis was false, so it was a reasonable experiment. If other labs attempted the same experiments and were unable to reproduce the same results, that would discredit the original, but a lack of attempts to reproduce the results does not say anything about the original.


> It comes up a lot, maybe because the result is kind of positive and aligns well with the HN crowd's liberal drug views?

Are there people who have liberal views on drugs because they don't believe that addiction is real or that drugs are often harmful? I have views about drugs which would probably be considered liberal, but they have nothing to do with whether drugs are addictive or harmful.

> That mice wouldn't become addicted to morphine is most certainly an extraordinary claim.

Is it? I've never known anyone who has claimed to be addicted or even exposed to morphine. Having observed no actual evidence myself, why is one claim that morphine is addictive any more extraordinary than another claim that morphine isn't addictive?


> Having observed no actual evidence myself, why is one claim that morphine is addictive any more extraordinary than another claim that morphine isn't addictive

You haven't observed it, but people in the law and health professions have and do on a regular basis. There have been plenty of studies on it, all consistent with its addictive nature. Your lack of personal experience does not make each side equally likely. One is clearly more consistent with reality, and the other is not. That's why one claim is extraordinary, and the other is not.


I've seen several critical opinions of Stanford Prison and Milgram, and they laid out all their objections. Do you have a link to something that lays out specific objections to the way Rat Park was conducted?



