

How do you measure the quality of a classifier (hint, not by "accuracy")? - jmount
http://www.win-vector.com/blog/2009/11/i-dont-think-that-means-what-you-think-it-means-statistics-to-english-translation-part-1-accuracy-measures/

======
apu
That's a lot of text to get to ROC curves... I'd cut it down to the essentials and
get to the curves ASAP.

Fundamentally, you have a threshold. Things scoring above it are classified as
"positive" and things below as "negative". At each threshold value you get some
rate of true positives (classified positive, actually positive) and false
positives (classified positive, actually negative). Plotting those two rates
against each other as the threshold sweeps is the ROC curve. You want the curve
to hug the top-left corner (with the usual axes) as closely as possible.
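
A minimal sketch of that sweep (the scores and labels below are made up purely
for illustration):

    import numpy as np

    # Made-up classifier scores and true 0/1 labels, for illustration only.
    y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
    scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5, 0.6, 0.3])

    pos = (y_true == 1).sum()
    neg = (y_true == 0).sum()

    # Sweep the threshold from high to low; each value gives one ROC point.
    for t in np.unique(scores)[::-1]:
        pred = scores >= t
        tpr = (pred & (y_true == 1)).sum() / pos  # true positive rate
        fpr = (pred & (y_true == 0)).sum() / neg  # false positive rate
        print(f"threshold={t:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}")

Each (FPR, TPR) pair is one point on the curve; hugging the top-left corner
means high TPR at low FPR.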

~~~
leecho0
Replace "threshold" with "parameter": classifiers can be tuned by parameters
other than a threshold.

The area under the ROC curve (AUC) is used to evaluate how good the model is,
considering the many parameter values that could be chosen for it.

However, when you actually want to make a prediction, you still need to pick a
value for the parameter (a single point on the ROC curve). So the F-measure is
often more useful for evaluating the predictions, while AUC is more useful for
evaluating the classifier itself.
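
A quick sketch of that distinction (assuming scikit-learn is available; the
data is the same made-up set as in the example above):

    from sklearn.metrics import f1_score, roc_auc_score

    y_true = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
    scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5, 0.6, 0.3]

    # AUC scores the ranking across all thresholds -- no threshold chosen yet.
    print("AUC:", roc_auc_score(y_true, scores))

    # F-measure scores one committed set of predictions, so a threshold
    # (0.5 here, an arbitrary choice) must be picked first.
    preds = [int(s >= 0.5) for s in scores]
    print("F1:", f1_score(y_true, preds))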

------
jmount
Our start on a statistics-to-English dictionary.

~~~
nobody_nowhere
Thanks, nicely done.

