
25 Years Of Conventional Evaluation Of Data Analysis Proves Worthless In Practice - kirubakaran
http://www.sciencedaily.com/releases/2008/09/080903101416.htm
======
rudyfink
The article was thin on detail. Here is what I take to be the actual paper:
<http://www.signal.uu.se/Publications/pdf/p0804.pdf> [Cross-validation and
Bootstrapping are Unreliable in Small Sample Classification (PDF)], found via
<http://www.signal.uu.se/Publications/ppapers.html> [the author's list of
journal publications].

~~~
hhm
Here is an explanation of cross-validation and bootstrapping:
[http://www.eecs.berkeley.edu/~pliang/cs294-spring08/lectures...](http://www.eecs.berkeley.edu/~pliang/cs294-spring08/lectures/cross/)
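To see the instability the paper is talking about, here is a minimal sketch (my own illustration, not from the paper): we repeatedly draw tiny two-class datasets from the *same* distribution, run leave-one-out cross-validation with a nearest-class-mean classifier, and look at how wildly the accuracy estimates swing from one small dataset to the next. The classifier, sample sizes, and class distributions are all illustrative choices.

```python
import random

def nearest_mean_predict(train, x):
    # train: list of (value, label); classify x by the closer class mean
    means = {}
    for label in (0, 1):
        vals = [v for v, lab in train if lab == label]
        means[label] = sum(vals) / len(vals)
    return min(means, key=lambda lab: abs(x - means[lab]))

def loo_cv_accuracy(data):
    # Leave-one-out cross-validation: hold out each point in turn,
    # train on the rest, and count correct predictions.
    correct = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        if nearest_mean_predict(train, x) == y:
            correct += 1
    return correct / len(data)

def draw_dataset(n_per_class, rng):
    # Two overlapping Gaussian classes: means 0 and 1, sd 1.
    data = [(rng.gauss(0.0, 1.0), 0) for _ in range(n_per_class)]
    data += [(rng.gauss(1.0, 1.0), 1) for _ in range(n_per_class)]
    return data

rng = random.Random(0)
# 500 independent tiny datasets (5 examples per class) from one distribution
estimates = [loo_cv_accuracy(draw_dataset(5, rng)) for _ in range(500)]
mean_est = sum(estimates) / len(estimates)
spread = max(estimates) - min(estimates)
print(f"mean LOO accuracy: {mean_est:.2f}, range across datasets: {spread:.2f}")
```

Even though every dataset comes from the same distribution, individual cross-validation estimates land all over the place; the spread only narrows as the sample grows relative to the natural variation between observations.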

------
ced
_this methodology is worthless in practice when the total number of examples
is small in relation to the natural variation that exists among different
observations_

 _... it therefore needs to be immediately replaced by Bayesian methods, for
example, which can deliver reliable measures of the uncertainty that exists_

This looks like dogmatic madness, but maybe the article just makes them sound
bad. Can anyone comment on how reliable the uncertainty estimates coming out
of Bayesian algorithms are?
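For one concrete (and deliberately simple) version of what "reliable uncertainty" means here, a sketch of my own, not the paper's method: treat a classifier's true accuracy as unknown, put a uniform Beta(1, 1) prior on it, and after observing k correct out of n test cases the posterior is Beta(k + 1, n - k + 1). The posterior standard deviation makes the remaining uncertainty explicit and shrinks as n grows, which is exactly what a point estimate from cross-validation does not give you.

```python
import math

def accuracy_posterior(k, n):
    # Posterior over true accuracy after k correct out of n, under a
    # uniform Beta(1, 1) prior: Beta(k + 1, n - k + 1).
    # Returns (posterior mean, posterior standard deviation).
    a, b = k + 1, n - k + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Same observed accuracy (80% correct), very different certainty:
small = accuracy_posterior(8, 10)      # 8 of 10 correct
large = accuracy_posterior(800, 1000)  # 800 of 1000 correct
print(f"n=10:   mean {small[0]:.3f}, sd {small[1]:.3f}")
print(f"n=1000: mean {large[0]:.3f}, sd {large[1]:.3f}")
```

Whether the posterior is *well calibrated* still depends on the model and prior matching reality, so Bayesian methods are not a free lunch either; their advantage in the small-sample setting is that the uncertainty is reported rather than hidden.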

------
michael_dorfman
I, for one, welcome our new Bayesian overlords....

Seriously, though-- it is precisely these kinds of applications that Bayesian
methods are aimed at. Are they really that infrequently used in practice?

