A Theory of Universal Learning [pdf] (princeton.edu)
93 points by emre 18 days ago | 3 comments



Calling this 'universal learning' is a stretch (and, in HN speak, 'clickbait'). The paper only deals with a particular subclass of learning algorithms in the supervised learning setting, within the PAC framework.

Specifically, they talk about learning algorithms that minimize some error function on training examples. That's not how learning happens in living organisms. Therefore calling it 'universal' is an unwarranted generalization, kind of like 'theory of everything'.
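
(For concreteness, the kind of rule I have in mind is empirical risk minimization over a hypothesis class; my own notation, not the paper's:

    \hat{h}_n = \arg\min_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), y_i)

i.e. pick whatever hypothesis minimizes the average loss on the n training examples.)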

A more appropriate name would be 'Distribution-Dependent Supervised PAC Learning' or something along those lines. It's solid work that addresses a particular niche of a particular theory.


"Universal consistency" is longstanding terminology for "error goes to zero under every possible distribution". So it seems reasonable to use the word universal in this way, referring to a "for every distribution" type of bound.


I see where you're coming from, but this is terminology that is pretty clear within the field of statistical learning theory, which is this paper's audience. Within learning theory I think this paper is actually quite important and cool, as its name suggests, so I would hardly call it "clickbait". At least, looking at the actual contents, I agree with their reasoning for calling this "universal learning".

As a very high-level summary: they look at how many samples are required to learn hypothesis classes, allowing this number to vary with the distribution to a limited degree (hence "universal"), rather than requiring, as existing models do, a guarantee that holds uniformly over all distributions. I say a limited degree because they make sure the shape of the learning curve is the same regardless of the distribution; they just allow the steepness to change, basically. PAC learning is really just supervised learning of binary classifiers, as you say (although it has generalizations to multiclass, regression, and unsupervised learning), and for convenience PAC-learnable hypothesis classes are often just abbreviated to "learnable" hypothesis classes.
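
Roughly formalized (my paraphrase; see the paper for the exact definitions): a class is learnable at rate R(n) in this universal sense if there is an algorithm whose expected error satisfies, for every distribution P it could be run on (realizable by the class, in their main setting),

    \mathbb{E}\big[\operatorname{err}_P(\hat{h}_n)\big] \;\le\; C_P \, R(c_P\, n)
    \quad \text{for all } n,

with constants C_P, c_P that may depend on P. A uniform (classical PAC) rate would instead demand a single bound \sup_P \mathbb{E}[\operatorname{err}_P(\hat{h}_n)] \le R(n) with no distribution-dependent constants. If I remember the paper right, the punchline is that only a few shapes of R(n) are possible in the universal setting: exponential, linear (roughly 1/n), or arbitrarily slow.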

This is made more awkward by the fact that the paper is so new it has not, to my knowledge, been published in any conference or journal yet (if it had been, I would have recommended affixing that venue's name to the title). So I see two options here: 1. adjust the name of the HN submission to make clear that this is in the field of statistical learning theory, or 2. change the name of the HN submission to something like "A Theory of Sample Complexity of Universal Learning" or "A Theory of Learning Curves and Universal Learning Rates", but then that would misrepresent the paper's actual title.



