
AI and Deep Learning in 2017 – A Year in Review - MrQuincle
http://www.wildml.com/2017/12/ai-and-deep-learning-in-2017-a-year-in-review/
======
hbt
the fact that this thread got over 100 points without a comment or discussion
proves what I suspected.

most of us are observers when it comes to this tech. sure, you can find a
quick tutorial somewhere to build a "learning model", or read an article to
grasp what is being discussed...

but when it comes to contributing something remotely significant, we've got
nothing, whether due to a lack of resources (giant datasets and computing
power) or a lack of the knowledge and experience to even theorize something
plausible.

this is even more frustrating knowing how important a role this tech will
play in the future.

being a passive observer is not enough.

I'm looking forward to the 2018 AI summary.

~~~
rinspy
The summary illustrates another point: the overwhelming majority of DL
applications are on visual and sound datasets. Outside of the Facebook/Google
bubble, images and sounds are not that important. On Kaggle, for example,
gradient boosted trees tend to dominate the structured data challenges, yet
the gradient boosted tree "revolution" is happening unnoticed.
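
For readers who haven't looked under the hood, here is a minimal from-scratch sketch of the idea behind those gradient boosted tree libraries (XGBoost, LightGBM): repeatedly fit a weak learner to the current residuals and add it with a learning rate. The toy 1-D dataset and the choice of stumps as weak learners are illustrative, not from the article.

```python
def fit_stump(xs, ys):
    """Best single-split regression stump on 1-D inputs, squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - (lm if x <= t else rm)) ** 2 for x, y in zip(xs, ys))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, rounds=10, lr=0.5):
    """Sequentially fit stumps to the residuals (the negative gradient of
    squared error) and accumulate them with a learning rate."""
    preds = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        s = fit_stump(xs, residuals)
        stumps.append(s)
        preds = [p + lr * s(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Toy tabular data: the boosted model converges toward the targets.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 1.0, 3.0, 3.0]
model = gradient_boost(xs, ys)
```

Real libraries replace the stump with a depth-limited tree and add regularization, but the residual-fitting loop is the core of the technique.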

~~~
mattnewton
I think the combination of a) more processed datasets than google/Facebook can
sometimes have, b) smaller cpu/gpu resources that google/Facebook have, and c)
the incentive to squeeze every last bit of log loss out, all encourages you to
ensable all your independently trained models under a gradient tree.
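
The stacking pattern described above can be sketched as follows. The base models, data, and the grid-searched convex weighting used as the meta-learner are all illustrative stand-ins; in the Kaggle setting the meta-learner would be a gradient boosted tree fit on out-of-fold predictions.

```python
def stump(feature, threshold):
    """Trivial base model: predict 1.0 if x[feature] > threshold, else 0.0."""
    return lambda x: 1.0 if x[feature] > threshold else 0.0

# Toy labelled data: (features, label).
data = [((0.2, 0.9), 1.0), ((0.8, 0.1), 0.0),
        ((0.3, 0.7), 1.0), ((0.9, 0.4), 0.0)]

# Two independently "trained" base models.
models = [stump(0, 0.5), stump(1, 0.5)]

# Level-1 features: each base model's prediction for each example.
meta_X = [[m(x) for m in models] for x, _ in data]
labels = [y for _, y in data]

def fit_meta(meta_X, labels):
    """Pick convex weights (w, 1 - w) minimizing squared error on the
    level-1 features -- the slot a gradient boosted tree would fill."""
    best_w, best_err = None, float("inf")
    for i in range(11):
        w = i / 10.0
        err = sum((w * f0 + (1.0 - w) * f1 - y) ** 2
                  for (f0, f1), y in zip(meta_X, labels))
        if err < best_err:
            best_w, best_err = (w, 1.0 - w), err
    return best_w

weights = fit_meta(meta_X, labels)

def ensemble(x):
    """Final stacked prediction: weighted combination of base models."""
    return sum(w * m(x) for w, m in zip(weights, models))
```

Here the meta-learner discovers that the second base model is the informative one and weights it fully; a tree-based meta-learner can additionally capture non-linear interactions between the base models' predictions.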

------
naveen99
If Hinton, like Jeremy Howard, also pivots to cryptocurrencies, we will know
a new AI winter is upon us.

