
A First Course in Design and Analysis of Experiments (2010) [pdf] - mindcrime
http://users.stat.umn.edu/~gary/book/fcdae.pdf
======
patagonia
When I find an interesting pdf online and want to make 2 hours disappear, I
just add "filetype:pdf site:" and throw it in the google search bar.

filetype:pdf site:users.stat.umn.edu/~gary/

------
nonbel
>"One question of interest is whether the times are the same on average for
the two workplaces. Formally, we test the null hypothesis that the average
runstitching time for the standard workplace is the same as the average
runstitching time for the ergonomic workplace."

Who would this be of interest to? I would never expect two workplaces to have
exactly the same "runstitching time" (or anything else).

Also, this is misstated: you would be testing whether the two datasets are
samples from distributions with the same average. I.e., the actual measured
averages are not expected to be the same.
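
A minimal sketch of what that test actually computes, with made-up numbers
(not the book's data) standing in for runstitching times:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: measures how far apart the two
    sample means are, relative to the sampling noise. A large |t| is
    evidence against the null that both distributions share one mean."""
    na, nb = len(sample_a), len(sample_b)
    # Standard error of the difference in means (unequal variances).
    se = sqrt(variance(sample_a) / na + variance(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Illustrative (invented) times in minutes per garment:
standard  = [4.90, 5.10, 5.00, 5.20, 4.95]
ergonomic = [4.75, 4.85, 4.95, 4.70, 4.80]
print(welch_t(standard, ergonomic))
```

Note that the two sample means (5.03 vs 4.81) differ, as they essentially
always will; the test only asks whether that gap is larger than chance
alone would produce.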

~~~
dcuthbertson
The "workplaces" could just be one large group from the same workplace,
split in two: one as a control and one to see the effects of ergonomics on
productivity. If that's the case, the variance in initial productivity
would be less than if the two groups were taken from different factories.

Whether or not that's true, it reminds me of the experiments done at the
Hawthorne Works plant in Illinois around the time of the Great Depression to
improve worker productivity by changing their environment. The workers'
output improved almost regardless of the environmental changes. The
conclusion was that output improved because someone was paying attention to
the workers. Henry Landsberger analyzed the experiments in the 1950s and
coined the term "the Hawthorne Effect." (Edit for grammar, spelling, and
coffee)

~~~
nonbel
>"workers output improved because someone was paying attention to them"

How much did it improve? All this test tells you is that there was some
difference, it could be minuscule.

~~~
rwilson4
One of the desired outputs of an experiment analysis is an estimate of the
size of the treatment effect, typically in the form of a confidence interval.
Studies that only report statistical significance (and there are many of
them) are of limited utility, since you can have a tiny effect that is
statistically significant, just as you can have a large effect that is not
statistically significant!
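
A rough sketch of reporting an effect size with an interval rather than a
bare yes/no, again with invented numbers; the critical value 2.0 is a crude
stand-in for the exact t quantile you'd look up for real work:

```python
from math import sqrt
from statistics import mean, variance

def diff_ci(sample_a, sample_b, t_crit=2.0):
    """Approximate confidence interval for the difference in means.
    t_crit ~= 2.0 is roughly the 95% level for moderate sample sizes;
    use the proper t-distribution quantile in a real analysis."""
    na, nb = len(sample_a), len(sample_b)
    diff = mean(sample_a) - mean(sample_b)
    se = sqrt(variance(sample_a) / na + variance(sample_b) / nb)
    return (diff - t_crit * se, diff + t_crit * se)

# Illustrative (invented) times in minutes per garment:
standard  = [4.90, 5.10, 5.00, 5.20, 4.95]
ergonomic = [4.75, 4.85, 4.95, 4.70, 4.80]
lo, hi = diff_ci(standard, ergonomic)
print(f"effect: {lo:.2f} to {hi:.2f} minutes")
```

The interval tells you both things at once: whether zero is plausible
(significance) and whether the effect is big enough to care about.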

