The Geneticists: Use evolutionary principles to have a model organize itself

The Bayesians: Pick good priors and use Bayesian statistics

The Symbolists: Use top-down approaches to modeling cognition, using symbols and hand-crafted features

The Conspirators: Hinton, Lecun, Bengio et al. End-to-end deep learning without manual feature engineering

The Swiss School: Schmidhuber et al. LSTMs as a path to general AI.

The Russians: Use Support Vector Machines and their strong theoretical foundation

The Competitors: Only care about performance and generalization robustness. Not shy about building extremely slow and complex models.

The Speed Freaks: Care about fast convergence, simplicity, online learning, ease of use, scalability.

The Tree Huggers: Use mostly tree-based models, like Random Forests and Gradient Boosted Decision Trees

The Compressors: View cognition as compression. Compressed sensing, approximate matrix factorization

The Kitchen-sinkers: View learning as brute-force computation. Throw lots of feature transforms and random models and kernels at a problem

The Reinforcement learners: Look for feedback loops to add to the problem definition. The environment of the model is important.

The Complexities: Use methods and approaches from physics, dynamical systems and complexity/information theory.

The Theorists: Will not use a method if there is no clear theory to explain it

The Pragmatists: Will use an effective method to show that there needs to be a theory to explain it

The Cognitive Scientists: Build machine learning models to better understand (human) cognition

The Doom-sayers: ML practitioners who worry about the singularity and care about beating human performance

The Socialists: View machine learning as a possible danger to society. Study algorithmic bias.

The Engineers: Worry about implementation, pipe-line jungles, drift, data quality.

The Combiners: Try to use the strengths of different approaches, while eliminating their weaknesses.

The PAC Learners: Search for the best hypothesis that is both accurate and computationally tractable.

See also http://www.kdnuggets.com/2015/03/all-machine-learning-models...

> It is common for people to learn about machine learning within one framework which often becomes their "home framework" through which they attempt to filter all machine learning. (Have you met people who can only think in terms of kernels? Only via Bayes Law? Only via PAC Learning?) Explicitly understanding the existence of these other frameworks can help resolve the confusion.




Nice list, has everything and the kitchen sink(ers). Two comments:

a) Inductive Logic Programming (and generally relational learning) is ideal for feature discovery and firmly in the symbolic camp, so the reliance of the Symbolists on hand-crafted features is not an absolute.

b) PAC learning should go under symbolic techniques, no? In fact, so should decision tree learning.

Also, I think it's obvious you can always unify and divide classifications like the above to come up with as many or as few "tribes" as you like. The real question is: are there really that many people who are wedded to their favourite technique, so much so that they won't ever try anything different?


a) perhaps "feature engineering" was not the right 2-gram. I was looking for the (cultural) difference in approaches. Logic programming starts with background knowledge, predicate logic, and hand-written rules on what is valid and what isn't. Deep learning tries to learn this bottom-up: when DL finds a rule or fact, it does so from data, not by using any pre-defined rules or facts.

b) if we apply hierarchical clustering, it would probably be a subset.

Anyway, this was more or less tongue-in-cheek. And yes, you could go on and on. I should have added "The Logicians", "The Game Theorists" and the NLP'ers solving object detection problems with visual bag-of-words. Also forgot to take a jab at business intelligence/operations research.

As for being wedded to a favorite technique, I think that is largely a problem for beginners (and PhD students with a supervisor who can only think from within a certain framework). I myself may try SVM, but I rank it pretty low as an alternative.


a) Weelll, ish :) DL really doesn't need any more direction than what's in the data, but with ILP (Inductive Logic Programming) you also don't _need_ to have any background knowledge. And it can totally discover its own features.

Anyway, it depends a lot on the specific algorithm; for instance, see Alignment-Based Learning [1] and the ADIOS algorithm [2] for two examples of thoroughly symbolic grammar induction algorithms (though not quite ILP) that work on unannotated, tokenised text, so are entirely unsupervised.

And, if I may be so bold, my own Master's dissertation [3], an unsupervised graph induction algo that learns a Prolog program from unannotated data. You won't find evidence of that on my github page, but I've used my algorithm to extract features from text, as in word embeddings. Also, it's a recursive partitioning algorithm, so essentially an unsupervised decision tree learner, only of course it learns a FOL theory rather than propositional trees. My hunch is you could use decision trees unsupervised and let them find their own features, although that'll have to go under Original Research for now :)

Those just happen to be three algorithms I know well enough, but you can google around for more examples. In general, relational learning can do away with the need for feature engineering; that's one of its big strengths.

In fact, I'm starting to think that - unless DL is somehow magical and special - it should be possible to turn most supervised learners into unsupervised feature learners, by stringing together instances of an algorithm and having each instance learn on features the previous one discovered. Again: Original Research and a big pinch of salt, 'cause it's just a hunch.
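
For concreteness, here's a rough sketch of what that chain could look like in scikit-learn. The pseudo-task (predicting one randomly chosen column from the rest) and the function itself are just my illustration of the idea, not anything from the papers above, so the same pinch of salt applies:

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeRegressor

    def chained_tree_features(X, n_stages=3, max_leaf_nodes=16, seed=0):
        # Each stage trains a tree on a self-supervised pseudo-task:
        # predict one randomly chosen column from the remaining ones.
        # The tree's one-hot-encoded leaf indices become the feature
        # set that the next stage learns on.
        rng = np.random.default_rng(seed)
        features = X
        for _ in range(n_stages):
            col = int(rng.integers(features.shape[1]))
            y = features[:, col]                       # pseudo-label
            inputs = np.delete(features, col, axis=1)  # the other columns
            tree = DecisionTreeRegressor(
                max_leaf_nodes=max_leaf_nodes, random_state=seed
            ).fit(inputs, y)
            leaves = tree.apply(inputs).reshape(-1, 1) # leaf id per sample
            enc = OneHotEncoder(handle_unknown="ignore")
            features = enc.fit_transform(leaves).toarray()
        return features

    X = np.random.default_rng(0).normal(size=(200, 8))
    print(chained_tree_features(X).shape)  # (200, n_leaves_of_last_tree)

The last stage's leaf-membership indicators are the learned representation, so it's essentially stacked unsupervised decision trees.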

[1] http://ilk.uvt.nl/menno/research/software/abl
[2] http://adios.tau.ac.il/algorithm.html
[3] https://github.com/stassa/THELEMA


Are you aware that you've made a lot of stuff for t-shirts, email forwards and memes? Be prepared to be supplemented, misquoted a lot, and attributed as "The Internet".



