Anyway, you don't think power analysis à la Cohen is useful?
Also, this isn't really about what I think; rather, I would hope people check the Fisher 1955 ref and go from there.
What I think, though, is that this whole idea of testing vague/vagrant hypotheses (e.g. the example we used here) is wrong in the worst way possible. The null hypothesis should be deduced from some theory, or at least correspond to what you care about. I have shared this paper on the site many times; I think it should be standard reading in high school: http://www.fisme.science.uu.nl/staff/christianb/downloads/me...
Sample size, effect size, and power are related concepts in the context of power analysis -- see also Cohen's "A power primer", which is available on the Internet. The concept of power has nothing to do with "degrees of evidence" or vague hypotheses.
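For what it's worth, the relationship among the three is easy to play with. A minimal sketch using statsmodels' power routines (the d = 0.5 "medium" effect and the 80% power target are just the conventional placeholders, not anything from this thread):

    # Cohen-style power analysis for a two-sample t-test: given any three of
    # effect size, sample size, alpha, and power, solve for the fourth.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Sample size per group needed to detect d = 0.5 at alpha = 0.05 with 80% power:
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(round(n))  # roughly 64 per group

    # Power actually achieved with only 20 per group for the same effect:
    print(round(analysis.power(effect_size=0.5, nobs1=20, alpha=0.05), 2))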
Sorry for the miscommunication. The point is that power is a Neyman/Pearson concept; Fisher said it didn't make sense. A gradient of evidence, on the other hand, is a Fisherian concept; Neyman/Pearson said that didn't make sense.
What people have been teaching as stats is a mishmash of the two that makes sense to no one who thinks these things through. Gigerenzer reviews this strange phenomenon and offers some entertaining commentary; it is a decent starting point.
Yes, it does. To properly assess the probability of incorrectly failing to reject a hypothesis, you need to know how likely the data would be under various rival hypotheses, and that depends on how precisely those rival hypotheses are specified. This is explained by Fisher in my original ref.
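To make that concrete, here is a rough simulation sketch (the rival effect sizes and the per-group n are assumed purely for illustration): the probability of wrongly retaining the null is only defined relative to a specific rival hypothesis, and it changes a lot depending on which rival you posit.

    # The probability of incorrectly failing to reject H0 (Type II error) is only
    # defined relative to a specific rival hypothesis about the true effect.
    import numpy as np
    from scipy import stats

    def type_ii_error(true_effect, n=30, alpha=0.05, n_sims=5000, seed=1):
        rng = np.random.default_rng(seed)
        failures = 0
        for _ in range(n_sims):
            a = rng.normal(0.0, 1.0, n)
            b = rng.normal(true_effect, 1.0, n)  # data generated under the rival
            _, p = stats.ttest_ind(a, b)
            failures += (p >= alpha)  # failed to reject even though the rival is true
        return failures / n_sims

    # A vague alternative ("some effect") gives no single answer; precise ones do:
    for rival in (0.2, 0.5, 0.8):
        print(rival, round(type_ii_error(rival), 2))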
Is the author using a null value to inform this perception?
Clearly he doesn't think it had exactly zero effect, since it affected him!
"Estimating parameters from sample" (on the right) would be his observation that there was little discernible effect. Thinking that 1000 reprints of the paper would have a larger effect on practice would more correspond to "theory" (on the left), although that is a pretty vague one.