
I see where you're coming from, but this is terminology that is pretty clear within the field of statistical learning theory, which is this paper's audience. Within learning theory I think this paper is actually quite important and cool, as its name suggests, so I would hardly call it "clickbait". At least, looking at the actual contents, I agree with their reasoning for calling this "universal learning".

As a very high-level summary, they look at how many samples are required to learn hypothesis classes, allowing this number to vary with the distribution to a limited degree (hence "universal"), rather than, as in existing models, requiring a guarantee that holds uniformly over all distributions. I say "a limited degree" because they make sure the shape of the learning curve is the same regardless of the distribution; they basically just allow the steepness to change. PAC learning is really just supervised learning of binary classifiers, as you say (although it has generalizations to multiclass, regression, and unsupervised learning), and for convenience PAC-learnable hypothesis classes are often just abbreviated to "learnable" hypothesis classes.
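
To make the contrast concrete, here is a rough LaTeX sketch of the two kinds of guarantee (the notation is mine, not the paper's: \hat h_n is the hypothesis returned after n samples, er_P is the expected error under distribution P, and R is the rate function):

    % Uniform (classical PAC-style) rate: one bound that holds for every
    % realizable distribution P simultaneously.
    \sup_{P} \; \mathbb{E}\bigl[\mathrm{er}_P(\hat h_n)\bigr] \le R(n)
    \quad \text{for all } n.

    % Universal rate: the shape of R is fixed, but the constants
    % C_P, c_P (the "steepness") are allowed to depend on P.
    \forall P \;\; \exists\, C_P, c_P > 0 : \quad
    \mathbb{E}\bigl[\mathrm{er}_P(\hat h_n)\bigr] \le C_P \, R(c_P \, n)
    \quad \text{for all } n.

The only change is the order of the quantifiers ("for every P there exist constants" rather than "there exist constants that work for every P"), and that swap is exactly what lets the learning curve keep its shape while its steepness varies with the distribution.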

This is made more awkward by the fact that the paper is so new it has not, to my knowledge, been published in any conference or journal yet (if it had been, I would have recommended affixing the venue's name to the title). So I see two options here: 1. Adjust the name of the HN submission to make clear that this is in the field of statistical learning theory. 2. Change the name of the HN submission to something like "A Theory of Sample Complexity of Universal Learning" or "A Theory of Learning Curves and Universal Learning Rates", but then that would actually misrepresent the content of the submission.



