
Let's Use ML for Insights - polm23
https://ehudreiter.com/2020/03/20/lets-use-ml-for-insights/
======
madhadron
This feels like the end of the arc beginning with the nonsensical article "The
End of Theory"[1] in 2008. The language game of working with models and
concepts is far richer than just "input in, output out."

[1]: https://www.wired.com/2008/06/pb-theory/

------
drongoking
Knowledge discovery used to be a big part of ML, and the KDD conference
included papers on association rules, clustering, rule learning, interpretable
classifiers, etc. In my experience, even with predictive analytics the
knowledge discovered along the way was often as valuable as the final
application --- although it's easier to sell a predictive capability than
"we're going to find something interesting in your data."

------
pedalpete
I had assumed this was a solved problem; am I wrong? Isn't this what quants
are doing with the markets: watching x number of inputs to predict the
probability of output y?

Can somebody clarify what I'm missing? I'd love to find out more about this
area of ML. Is there a name for it?

~~~
jw887c
I view it as:

Explainable models = statistical models (regression / multilevel models).

Black box models = machine learning (tree-based models).

AI = black box models for computer vision / NLP.
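A minimal sketch of the "explainable" end of that spectrum, using synthetic data (all names and numbers here are illustrative, not from the article): an ordinary least squares fit whose coefficients can be read directly as per-feature effect sizes.

```python
import numpy as np

# Synthetic data: y = 2*x1 - 3*x2 + 1 + small noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + 1.0 + rng.normal(scale=0.1, size=200)

# Ordinary least squares via lstsq; unlike a black-box model, the
# fitted coefficients are directly interpretable as effect sizes.
A = np.column_stack([X, np.ones(len(X))])  # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(coef)  # ≈ [ 2.0, -3.0, 1.0 ]
```

The "insight" is the coefficient vector itself: each entry says how much y moves per unit of that input, which is exactly what a black-box classifier does not hand you.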

~~~
mistrial9
Hmm, "black box models" usually refers to neural nets that are built
iteratively, often "unsupervised". Reading this is confusing because
"tree-based models" sounds like it covers non-NN methods such as random
forests, and a random forest is reproducible and may be very explainable,
depending on how the setup was done (and other factors).

AI = this term has a long history of multiple meanings, but a common one is
"get computers to solve a question that used to be solvable only by a
human". It is not at all specific to vision or natural language processing.
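The reproducibility point above can be sketched with a toy bagged ensemble of decision stumps (a hand-rolled stand-in for a real random forest library; all names and data here are illustrative): with a fixed seed the bootstrap draws, and hence the fitted model, are bit-identical across runs, and counting which feature each stump split on gives a crude explanation of what the model learned.

```python
import numpy as np

def fit_stump(X, y):
    # Exhaustively pick the single-feature threshold split that
    # minimizes within-leaf squared error.
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, left.mean(), right.mean())
    return best[1:]  # (feature, threshold, left_value, right_value)

def fit_forest(X, y, n_trees=25, seed=0):
    # Bagging: each stump is fit on a seeded bootstrap resample,
    # so a fixed seed makes the whole ensemble reproducible.
    rng = np.random.default_rng(seed)
    return [fit_stump(X[rng.integers(0, len(X), len(X))],
                      y[rng.integers(0, len(X), len(X))] if False else
                      y[(idx := rng.integers(0, len(X), len(X)))]) if False else
            fit_stump(X[(idx := rng.integers(0, len(X), len(X)))], y[idx])
            for _ in range(n_trees)]

def predict(trees, X):
    # Average the stump outputs.
    preds = [np.where(X[:, j] <= t, lo, hi) for j, t, lo, hi in trees]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = (X[:, 1] > 0).astype(float)  # only feature 1 actually matters

f1 = fit_forest(X, y, seed=7)
f2 = fit_forest(X, y, seed=7)
# Same seed -> identical ensembles, hence identical predictions.
assert np.allclose(predict(f1, X), predict(f2, X))

# A crude "explanation": how often each feature was chosen to split on.
counts = np.bincount([j for j, *_ in f1], minlength=3)
print(counts)  # feature 1 should dominate
```

Real libraries expose the same ideas as a `random_state` argument and per-feature importance scores; the point is that "tree ensemble" and "black box" are not the same thing.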

