Gradient Boosting models are another kind of ensemble model, distinct from Random Forest. In Random Forest (RF), the goal is to build many overfitted models, each trained on a random subset of the training data, and combine their individual predictions (by averaging or voting) into the final prediction. In Gradient Boosting, the goal is to build a long sequence of underfitted models, each one correcting the errors (residuals) of the models before it, and the cumulative sum of their predictions forms the final prediction.
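To make the boosting idea concrete, here is a minimal sketch of the core loop: fit a shallow tree to the residuals of the current cumulative prediction, add its (shrunken) output to that prediction, and repeat. The toy dataset, tree depth, learning rate, and number of estimators are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of gradient boosting for regression with squared error.
# Dataset, hyperparameters, and helper names are assumptions for illustration.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)  # noisy toy target

n_estimators = 100    # number of weak learners in the sequence
learning_rate = 0.1   # shrinkage applied to each learner's contribution

# Start from a constant prediction (the mean), then repeatedly fit a shallow
# tree to the residuals of the cumulative prediction so far.
prediction = np.full_like(y, y.mean())
trees = []
for _ in range(n_estimators):
    residuals = y - prediction                     # errors of the ensemble so far
    tree = DecisionTreeRegressor(max_depth=2)      # deliberately underfitted learner
    tree.fit(X, residuals)
    prediction += learning_rate * tree.predict(X)  # update the cumulative prediction
    trees.append(tree)

def predict(X_new, base=y.mean()):
    """Sum the contributions of all weak learners on top of the base value."""
    out = np.full(len(X_new), base)
    for tree in trees:
        out += learning_rate * tree.predict(X_new)
    return out

print("train MSE:", np.mean((y - predict(X)) ** 2))
```

Contrast this with Random Forest, where each tree is grown deep on its own bootstrap sample and the trees never see each other's errors; the sequential residual fitting above is what makes boosting different.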