
a) Perhaps "feature engineering" was not the right 2-gram. I was looking for the (cultural) difference in approaches: logic programming starts with background knowledge, predicate logic, and hand-written rules on what is valid and what isn't, while deep learning tries to learn this bottom-up. When DL finds a rule or fact, it does so from data, not by using any pre-defined rules or facts.

b) If we apply hierarchical clustering, it would probably be a subset.

Anyway, this was more or less tongue-in-cheek. And yes, you could go on and on: I should have added "The Logicians", "The Game Theorists", and the NLP'ers solving object detection problems with visual bag-of-words. I also forgot to take a jab at business intelligence/operations research.

As for being wedded to a favorite technique, I think that is largely a problem for beginners (and PhD students with a supervisor who can only think from within a certain framework). I myself may try an SVM, but I rank it pretty low as an alternative.

a) Weelll, ish :) DL really doesn't need any more direction than what's in the data, but with ILP (Inductive Logic Programming) you also don't _need_ to have any background knowledge. And it can totally discover its own features.

Anyway, it depends a lot on the specific algorithm. For instance, see Alignment-Based Learning [1] and the ADIOS algorithm [2], two examples of thoroughly symbolic grammar induction algorithms (though not quite ILP) that work on unannotated, tokenised text, so are entirely unsupervised.
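To give a flavour of the alignment idea, here's a toy sketch in Python (not the actual ABL system, and the sentences are made up for illustration): align two sentences on their shared prefix and suffix, and hypothesise that the parts that differ are constituents of the same type.

  # Toy sketch of the alignment step behind Alignment-Based Learning.
  # NOT the real ABL implementation, just the core hypothesis: if two
  # sentences share a context, the differing parts are assumed to be
  # interchangeable, i.e. constituents of the same (unnamed) type.

  def align(s1, s2):
      """Return (shared_prefix, middle1, middle2, shared_suffix)
      for two tokenised sentences."""
      i = 0
      while i < min(len(s1), len(s2)) and s1[i] == s2[i]:
          i += 1
      j = 0
      while (j < min(len(s1), len(s2)) - i
             and s1[len(s1) - 1 - j] == s2[len(s2) - 1 - j]):
          j += 1
      suffix = s1[len(s1) - j:] if j else []
      return s1[:i], s1[i:len(s1) - j], s2[i:len(s2) - j], suffix

  s1 = "the cat sat on the mat".split()
  s2 = "the dog sat on the mat".split()
  prefix, m1, m2, suffix = align(s1, s2)
  print(prefix, suffix)  # ['the'] ['sat', 'on', 'the', 'mat']
  print(m1, m2)          # ['cat'] ['dog'] -> hypothesised same type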

And, if I may be so bold, my own Master's dissertation [3]: an unsupervised grammar induction algorithm that learns a Prolog program from unannotated data. You won't find evidence of that on my github page, but I've used my algorithm to extract features from text, as in word embeddings. Also, it's a recursive partitioning algorithm, so essentially an unsupervised decision tree learner, except of course it learns a FOL theory rather than propositional trees. My hunch is you could use decision trees unsupervised and let them find their own features, although that will have to go under Original Research for now :)
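For what it's worth, something close to that hunch already exists in library form: scikit-learn's RandomTreesEmbedding grows a forest of completely random trees, with no labels, and one-hot encodes the leaf each point lands in as a learned feature. A minimal sketch on made-up data:

  # Unsupervised feature learning with decision trees: each point is
  # encoded by the leaves it reaches in a forest of randomly-grown trees.
  import numpy as np
  from sklearn.ensemble import RandomTreesEmbedding

  rng = np.random.RandomState(0)
  X = rng.randn(200, 5)                 # unlabelled data

  embedder = RandomTreesEmbedding(n_estimators=10, max_depth=3,
                                  random_state=0)
  X_leaves = embedder.fit_transform(X)  # sparse one-hot leaf indicators

  print(X_leaves.shape)  # (200, total number of leaves across trees)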

Those just happen to be three algorithms I know well enough, but you can google around for more examples. In general, relational learning can do away with the need for feature engineering; it's one of its big strengths.

In fact, I'm starting to think that, unless DL is somehow magical and special, it should be possible to turn most supervised learners into unsupervised feature learners, by stringing together instances of an algorithm and having each instance learn on features the previous one discovered. Again, Original Research and a big pinch of salt, 'cause it's just a hunch.
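Here's a hedged sketch of that stringing-together idea, using k-means as the base learner purely for illustration (the layer widths are arbitrary): each stage's distances-to-centroids become the next stage's input features.

  # Stacked unsupervised feature learning: each stage is a KMeans
  # instance trained on the features produced by the previous stage.
  # transform() maps points to distances from the learned centroids,
  # which become the next stage's input. A sketch of the hunch above,
  # not an established method.
  import numpy as np
  from sklearn.cluster import KMeans

  rng = np.random.RandomState(0)
  X = rng.randn(300, 10)                 # unlabelled data

  features = X
  for n_clusters in (16, 8, 4):          # arbitrary layer widths
      km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
      km.fit(features)
      features = km.transform(features)  # distances to centroids
      print(features.shape)              # (300, 16) -> (300, 8) -> (300, 4)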

[1] http://ilk.uvt.nl/menno/research/software/abl
[2] http://adios.tau.ac.il/algorithm.html
[3] https://github.com/stassa/THELEMA
