
To be fair, linear models are statistical models, and many of those are explainable.

Random Forest is not explainable per se... A single decision tree is explainable, but an ensemble of them is not, at least not in the sense that statistical models are.

In linear models the coefficients give you a direct numerical sense of the linear associations, whereas Random Forest gives you out-of-bag (OOB) error and feature importances, which aren't as clear cut. It's also really dependent on the quality of the data, and most machine learning models that aren't statistical models depend heavily on a large amount of data to overcome the model's weaknesses (for random forest, selection bias).
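To make that concrete, here's a minimal sketch (assuming scikit-learn and a synthetic regression dataset with hypothetical feature names) of what each model actually hands you: the linear model's coefficients read directly as per-unit associations, while the forest only gives you an OOB score and relative importances.

    # Minimal sketch: interpreting a linear model vs. a random forest
    # (assumes scikit-learn; feature names are hypothetical)
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor

    X, y = make_regression(n_samples=500, n_features=4, noise=10.0, random_state=0)
    names = ["x1", "x2", "x3", "x4"]

    lin = LinearRegression().fit(X, y)
    # Each coefficient is a direct statement: holding the other features fixed,
    # a one-unit increase in x_j changes the prediction by coef_[j].
    for name, coef in zip(names, lin.coef_):
        print(f"{name}: {coef:+.2f} per unit")

    rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0).fit(X, y)
    # The forest only offers aggregate diagnostics: an out-of-bag R^2 and
    # impurity-based importances that rank features but carry no units or sign.
    print("OOB R^2:", round(rf.oob_score_, 3))
    for name, imp in zip(names, rf.feature_importances_):
        print(f"{name}: importance {imp:.3f}")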

Statistics, to toot my own horn a little, has vast breadth and depth in inference/explainability, with a huge literature spanning different fields (e.g. econometrics with time series, biostatistics with longitudinal data, survival analysis, etc.). There are also techniques for building parsimonious models, like logistic regression with purposeful selection by Dr. Lemeshow and Dr. Hosmer. And there are different ways to build inference models versus predictive/forecast models.
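For anyone curious what purposeful selection roughly looks like, here's a simplified sketch of its first two steps (univariable screening at p < 0.25, then pruning the multivariable model), assuming statsmodels and made-up column names; the full Hosmer-Lemeshow procedure also checks whether dropping a variable shifts the remaining coefficients (confounding), which I've omitted here.

    # Rough sketch of the first steps of purposeful selection for logistic
    # regression (assumes statsmodels/pandas; data and column names hypothetical)
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.normal(size=(300, 4)), columns=["age", "bmi", "smoker", "x4"])
    y = (rng.random(300) < 1 / (1 + np.exp(-(0.8 * df["age"] - 0.5 * df["smoker"])))).astype(int)

    # Step 1: univariable screening -- keep anything with p < 0.25
    candidates = []
    for col in df.columns:
        m = sm.Logit(y, sm.add_constant(df[[col]])).fit(disp=0)
        if m.pvalues[col] < 0.25:
            candidates.append(col)

    # Step 2: fit the multivariable model and drop clearly non-significant terms
    full = sm.Logit(y, sm.add_constant(df[candidates])).fit(disp=0)
    keep = [c for c in candidates if full.pvalues[c] < 0.05]
    print("screened in:", candidates)
    print("retained:", keep)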

I think many people know the general idea but not in depth, because it's a field with many deep rabbit holes.



