Hacker News

I find it odd that Ordinary Least Squares is missing from the map, even though it's probably more widely used than all the other methods on the map combined.

However, it is mentioned at the top here: http://scikit-learn.org/stable/supervised_learning.html.

OLS is a special case of ElasticNet, Lasso, and ridge regression with the regularization parameters set to zero. (The latter two are also special cases of ElasticNet with one of the two regularization parameters set to zero.) In the presence of many predictors or multicollinearity among the predictors, OLS tends to overfit the data and regularized models usually provide better predictions, although OLS still has its place in exploratory data analysis.
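To make the special-case relationship concrete, here's a minimal NumPy sketch on synthetic data, using the closed-form ridge solution (the alpha here plays the role of scikit-learn's regularization strength):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))                        # 100 samples, 3 predictors
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

def ridge(X, y, alpha):
    """Closed-form ridge solution: (X'X + alpha*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

w_ols = np.linalg.lstsq(X, y, rcond=None)[0]             # plain OLS
w_ridge0 = ridge(X, y, alpha=0.0)                        # ridge, no penalty
w_ridge1 = ridge(X, y, alpha=10.0)                       # ridge, real penalty

# With alpha = 0 the ridge solution is exactly OLS; with alpha > 0
# the coefficients are shrunk toward zero.
print(np.allclose(w_ols, w_ridge0))                      # True
print(np.linalg.norm(w_ridge1) < np.linalg.norm(w_ols))  # True
```

The same reduction holds for Lasso and ElasticNet at zero penalty, though their coordinate-descent solvers aren't meant to be run that way in practice.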


To add to simonster's comment [1]: confusingly, OLS is also morally equivalent to what the map calls "SGD regressor" with a squared loss function [2]: it minimizes the same objective as OLS, just via stochastic gradient updates rather than a closed-form solution. It is also nearly equivalent, with many caveats and details aside, to SVR with a linear kernel and practically no regularization.
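A quick sketch of that equivalence on synthetic data, using full-batch gradient descent for determinism (SGDRegressor makes per-sample updates, but it's descending the same squared-loss surface):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.standard_normal(200)

# OLS solution via least squares.
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Gradient descent on the same objective, (1/n)||Xw - y||^2.
n = len(y)
w = np.zeros(3)
lr = 0.1
for _ in range(2000):
    grad = (2.0 / n) * X.T @ (X @ w - y)
    w -= lr * grad

# Both routes minimize the same convex objective, so they agree.
print(np.allclose(w, w_ols, atol=1e-6))  # True
```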

So yeah, it is confusing. There is a lot of overlap between several disciplines and it's still an emerging field.

[1] https://news.ycombinator.com/item?id=7713940

[2] http://scikit-learn.org/dev/modules/sgd.html#regression


It's also odd there's no mention of Logistic Regression.


Yeah, the nomenclature is not very rigorous and there is some overlap depending on how you look at it. Roughly, and without being pedantic, the closest thing on that map would be SGD with a logistic loss function [1], which minimizes the same log-loss objective as unregularized logistic regression.
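As a rough illustration on synthetic data (full-batch gradient descent standing in for per-sample SGD, and the "true" coefficients below are made up), minimizing the logistic loss recovers exactly what logistic regression fits:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
X = rng.standard_normal((n, 2))
w_true = np.array([2.0, -3.0])              # hypothetical true coefficients
p = 1.0 / (1.0 + np.exp(-X @ w_true))
y = (rng.random(n) < p).astype(float)       # Bernoulli labels

# Gradient descent on the mean log loss -- the objective of
# unregularized logistic regression, which "SGD with a logistic
# loss" optimizes one sample at a time.
w = np.zeros(2)
lr = 0.5
for _ in range(1000):
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (preds - y) / n
    w -= lr * grad

# Classify by the sign of the fitted logit.
acc = np.mean((X @ w > 0) == (y == 1))
print(w, acc)
```

The fitted weights recover the signs (and roughly the scale) of the generating coefficients, which is all logistic regression promises here.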

[1] http://scikit-learn.org/dev/modules/sgd.html#classification
