
Warning Signs in Experimental Design and Interpretation (2007) - chollida1
http://norvig.com/experiment-design.html
======
throwaway6356
> Lack of Double-Blind Studies

"We know there is a placebo effect wherein patients do better when they are
told they are receiving a treatment: the patients' expectations play a role in
their recovery. To make sure we are studying the effect of the treatment
itself and not the patients' expectations, it is better to give all patients
the same expectation. So we tell them, for example, "take this pill, it might
be experimental drug X or it might be a sugar pill." The double-blind part is
important because we don't want the experimenters to subconsciously tip off
the subjects as to what group they are in, nor to treat one group differently
than the other, nor to analyze the results differently."

One thing that always strikes me about double-blind, placebo-based studies is
that they often test a substance with a detectable physiological effect
against an inert substance with no detectable effect.

This methodology seems fundamentally flawed. A true test would compare
against a placebo that has a similar (or at least detectable) physiological
effect but is not expected to produce any efficacious outcome. Otherwise the
person getting the real drug feels the physiological effect and gets a
placebo effect on top, while the person getting the placebo does not.
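
A toy simulation of that mechanism (all numbers are invented;
`expectancy_boost` stands in for the placebo effect, and I assume drug-arm
patients always notice the side effects):

    import random
    from statistics import mean

    random.seed(0)

    def trial(n, drug_effect, expectancy_boost, placebo_detectable):
        # Higher score = better outcome. A patient who believes they got
        # the real drug receives an expectancy (placebo-effect) boost.
        drug_arm = [random.gauss(0, 1) + drug_effect + expectancy_boost
                    for _ in range(n)]  # side effects reveal the assignment
        boost = expectancy_boost if placebo_detectable else 0.0
        placebo_arm = [random.gauss(0, 1) + boost for _ in range(n)]
        return mean(drug_arm) - mean(placebo_arm)

    # True pharmacological effect 0.2; expectancy adds another 0.5.
    print("inert placebo: ", trial(10000, 0.2, 0.5, placebo_detectable=False))
    print("active placebo:", trial(10000, 0.2, 0.5, placebo_detectable=True))

With an inert placebo the measured difference comes out near 0.7 (drug effect
plus expectancy); with a detectable "active" placebo both arms get the
expectancy boost and the difference comes out near the true 0.2.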

~~~
Larrikin
It seems that it would be unethical, for example, to make someone
intentionally sick to mimic the side effects of the drug with none of the
expected positive results.

~~~
jfoutz
Yes. Yes it would. But comparing against baby aspirin might be worthwhile.
There is some (safe?) effect beyond sugar pills.

Remember, the patients don’t know each other, and don’t compare symptoms.
Hopefully.

------
richardhod
(2007)

Most successful prior posting:
[https://news.ycombinator.com/item?id=7598581](https://news.ycombinator.com/item?id=7598581)

------
rwilson4
This is all good info; I'll add that a confidence interval on the effect
size[0] is more informative than a p-value. (This is subtly different from a
confidence interval on the response in each group, which the article
discusses under Warning Sign I5: taking p too seriously.) A sketch of the
distinction follows the link below.

[0] "It's the effect size, stupid"
[https://www.leeds.ac.uk/educol/documents/00002182.htm](https://www.leeds.ac.uk/educol/documents/00002182.htm)
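
A rough sketch in Python (the data are invented; Cohen's d is my choice of
effect-size measure here, and the interval uses the standard normal
approximation to its standard error):

    import math
    from scipy import stats

    def cohens_d_with_ci(a, b, z=1.96):
        # Cohen's d with an approximate 95% CI (normal approximation
        # to the standard error of d).
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
        d = (ma - mb) / sd
        se = math.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
        return d, (d - z * se, d + z * se)

    treated = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3, 5.2, 4.7]
    control = [4.6, 4.9, 4.5, 4.8, 4.4, 4.7, 5.0, 4.3]

    _, p = stats.ttest_ind(treated, control)
    d, (lo, hi) = cohens_d_with_ci(treated, control)
    print(f"p = {p:.3f}")  # only says the difference is probably not zero
    print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # how big, how precisely known

The p-value collapses everything to "significant or not"; the interval on d
tells you whether the effect is big enough to care about and how tightly the
data pin it down.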

~~~
b_tterc_p
I like this. But...

“As a further example he states that the difference in IQ between holders of
the Ph.D. degree and 'typical college freshmen' is comparable to an effect
size of 0.8.”

Boy, there is a lot to unpack there.

------
pierrebai
I slightly dislike that his opening paragraphs are wrong. The statement "The
group with treatment X had significantly less disease (p = 1%)" does not mean
"treatment X will prevent disease." It only means that there was less disease,
with a high probability that the difference is real. In other words, the study
might be 99% sure it reduces the disease by 5%.
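
A made-up numeric illustration of that reading, using a chi-squared test on
the 2x2 outcome table (counts are hypothetical):

    from scipy import stats

    # Hypothetical trial, 100,000 patients per arm: the control group
    # falls ill 20% of the time, the treated group 19% -- a 5%
    # relative reduction.
    n = 100_000
    sick_treated, sick_control = 19_000, 20_000

    table = [[sick_treated, n - sick_treated],
             [sick_control, n - sick_control]]
    chi2, p, dof, _ = stats.chi2_contingency(table)
    print(f"p = {p:.2e}")  # far below 1%, yet the effect is only a
                           # one-percentage-point drop in disease rate

The result is highly "significant", but all it supports is a modest relative
reduction, not "treatment X will prevent disease."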

~~~
pooppaint
I'm guessing English is not your first language. Nowhere does the author make
the implication you state; in fact, the article asserts the contrary.

