This looks like an interesting project. I'd take the accuracy results with a pinch of salt, though: growing deeper trees often improves accuracy, and in the test scenario XGBoost is handicapped by a limited depth. As the author says on Reddit, it's difficult to do an apples-to-apples comparison of the two methods, because their approaches to growing trees are very different [a bit like DFS vs BFS].
More relevant for 'real-world' data is whether this library supports categorical features at all. The answer seems to be no (then again, neither does XGBoost).
The text in the Parallel experiments section [1] suggests that the result on the Criteo dataset was achieved by replacing the categorical features with their CTR and count.
[1] From https://github.com/Microsoft/LightGBM/wiki/Experiments#paral...:
"This data contains 13 integer features and 26 category features of 24 days click log. We statistic the CTR and count for these 26 category features from first ten days, then use next ten days’ data, which had been replaced the category features by the corresponding CTR and count, as training data. The processed training data has total 1.7 billions records and 67 features."
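The encoding described in that quote can be sketched roughly as follows. This is a toy illustration of the general CTR/count-encoding idea, not the actual Criteo preprocessing; the column names (`day`, `site`, `clicked`) are hypothetical stand-ins for the real schema.

```python
import pandas as pd

# Toy click log; column names are hypothetical, not the Criteo schema.
log = pd.DataFrame({
    "day":     [1, 1, 1, 2, 2, 2, 11, 11, 12],
    "site":    ["a", "a", "b", "a", "b", "b", "a", "b", "c"],
    "clicked": [1, 0, 1, 0, 1, 0, 1, 0, 1],
})

# Fit the encoding on the first period only (here: days 1-10),
# mirroring the "first ten days" split described in the wiki.
fit = log[log["day"] <= 10]
stats = fit.groupby("site")["clicked"].agg(ctr="mean", count="size")

# Replace the categorical column in the later period (days 11+)
# with its CTR and count; categories unseen in the first period
# come out as NaN and need some fallback value.
train = log[log["day"] > 10].copy()
train = train.join(stats, on="site").drop(columns="site")
print(train)
```

The upside is that the model only ever sees two numeric columns per categorical feature; the downside is the leakage/coverage issues hinted at above (unseen categories, and the need to compute the statistics on a disjoint time slice).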
I'd love to see a Python interface for this: just drop in a pandas DataFrame, maybe with a scikit-learn-style fit/predict API.
Saving/loading models, too...
That would definitely boost adoption.
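The kind of interface being asked for might look like the sketch below. Everything here is hypothetical: `LGBMRegressor` is an invented name, and the internal "model" is a placeholder (it just predicts the training mean) standing in for the actual gradient-boosting core; the point is only the fit/predict/save/load shape.

```python
import pickle
import numpy as np

class LGBMRegressor:
    """Hypothetical scikit-learn-style wrapper (name and API invented)."""

    def __init__(self, num_leaves=31, learning_rate=0.1):
        self.num_leaves = num_leaves
        self.learning_rate = learning_rate

    def fit(self, X, y):
        # Placeholder for real training: just remember the target mean.
        X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
        self._mean_ = y.mean()
        return self  # scikit-learn convention: fit returns self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        return np.full(len(X), self._mean_)

    def save(self, path):
        with open(path, "wb") as f:
            pickle.dump(self, f)

    @staticmethod
    def load(path):
        with open(path, "rb") as f:
            return pickle.load(f)

# Usage: accepts anything array-like, e.g. a pandas DataFrame.
model = LGBMRegressor().fit([[1], [2], [3]], [2.0, 4.0, 6.0])
print(model.predict([[4]]))  # placeholder model predicts the training mean
```

Pickling the whole estimator is the lazy route to save/load; a real library would want a versioned, text-based model format as well.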