
I agree that neural nets are state-of-the-art and do quite well on certain types of problems (NLP and vision, which are important problems). But a lot of data is structured (sales, churn, recommendations, etc.), and it is much easier to train an xgboost model than a neural net. Training neural nets requires a very expensive machine or expensive cloud compute, and even then it is not easy. Ease of implementation is an important factor that gets overlooked in academia. And on non-NLP, non-image datasets, the single best Kaggle model is usually an xgboost model, probably developed in a tenth of the time it would take to build a good neural net model. Xgboost has also come a long way since it was first introduced, with early stopping being one example of a significant improvement.
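For anyone who hasn't used it, here is a minimal sketch of what that early-stopping workflow looks like (the dataset, split, and parameters below are purely illustrative, not from any particular project; it assumes the standard xgboost and scikit-learn Python APIs):

    import xgboost as xgb
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    # Illustrative synthetic tabular data; stand-in for a real structured dataset.
    X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_valid, label=y_valid)

    params = {
        "objective": "binary:logistic",
        "eval_metric": "auc",
        "max_depth": 6,
        "eta": 0.1,
    }

    # Stop boosting when validation AUC has not improved for 50 rounds.
    booster = xgb.train(
        params,
        dtrain,
        num_boost_round=1000,
        evals=[(dvalid, "valid")],
        early_stopping_rounds=50,
    )
    print("best iteration:", booster.best_iteration)

That's roughly the whole thing: a validation set, one extra argument, and no GPU required, which is a big part of the ease-of-implementation argument.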



How can you say that ease of implementation is overlooked in academia, when academia created the exact tools you are speaking of?


Academia does not have to run models in production. It has few computational constraints, and most academic datasets lack a feedback loop, so there is no drift to combat, no debugging, and no retraining. Papers are often accepted as long as they match or beat the state of the art. Not many academics have to deal with the business side of running models in prod.

All of this leads to ease of implementation being overlooked. Especially in NLP, you see a lot of overengineering with deep neural nets (where the feature engineering is hidden inside the architecture). These models are hard to implement and reuse.

But yeah: academia/theoretical machine learning creates the very tools for applied machine learning.



