
How to explain gradient boosting - parrt
http://explained.ai/gradient-boosting/index.html
======
Gradient boosting machines (GBMs) are currently very popular and so it's a
good idea for machine learning practitioners to understand how GBMs work. The
problem is that understanding all of the mathematical machinery is tricky and,
unfortunately, these details are needed to tune the hyper-parameters. (Unlike,
say, Random Forests, GBMs require hyper-parameter tuning to get a decent
model.) Our goal in this article is to explain the intuition behind gradient
boosting, provide visualizations for model construction, explain the
mathematics as simply as possible, and answer thorny questions such as why GBM
is performing “gradient descent in function space.” We've split the discussion
into three morsels and a FAQ for easier digestion. Written by Terence Parr and
Jeremy Howard.
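To preview the core idea before the detailed discussion, here is a minimal from-scratch sketch (not the article's code) of gradient boosting for squared-error loss. It assumes one-dimensional inputs and uses "stumps" (single-split trees) as weak learners; each stage fits a stump to the current residual, which for squared error is the negative gradient of the loss:

```python
# Minimal gradient boosting sketch for squared error (illustrative, not from
# the article). Weak learner: a regression stump on 1-D inputs.

def fit_stump(x, residual):
    """Find the split on x that best fits the residual with two constants."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    best = None
    for k in range(1, len(x)):
        left = [residual[order[i]] for i in range(k)]
        right = [residual[order[i]] for i in range(k, len(x))]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - lmean) ** 2 for v in left)
               + sum((v - rmean) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, x[order[k - 1]], lmean, rmean)
    _, thresh, lmean, rmean = best
    return lambda xi: lmean if xi <= thresh else rmean

def boost(x, y, n_stages=100, lr=0.1):
    """Stagewise additive model: start from the mean, add shrunken stumps."""
    f0 = sum(y) / len(y)                      # initial model: the target mean
    pred = [f0] * len(y)
    stumps = []
    for _ in range(n_stages):
        # residual = negative gradient of squared-error loss at current model
        residual = [yi - pi for yi, pi in zip(y, pred)]
        h = fit_stump(x, residual)
        stumps.append(h)
        pred = [pi + lr * h(xi) for pi, xi in zip(pred, x)]
    return lambda xi: f0 + lr * sum(h(xi) for h in stumps)

x = [1, 2, 3, 4, 5]
y = [1.0, 1.2, 3.0, 3.1, 3.2]
model = boost(x, y)
```

After enough stages the composite model's predictions approach the training targets; the learning rate `lr` (a hyper-parameter of exactly the kind discussed above) trades off how aggressively each stage corrects the residual.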

