
Engineering is the bottleneck in Deep Learning research - pramodbiligiri
http://blog.dennybritz.com/2017/01/17/engineering-is-the-bottleneck-in-deep-learning-research/?imm_mid=0ec94e&cmp=em-data-na-na-newsltr_20170125
======
saip
Agreed. The tooling around deep learning is not as mature as the tooling
around software development. There is a fair amount of engineering and grunt
work needed even to get started, let alone to build on others' research. A few
problems off the top of my head:

- Setup: Installing DL frameworks, Nvidia drivers and CUDA is an exercise in
dependency hell. Trying to run someone else's project, whose dependencies
differ from yours, is difficult to get right. Docker images [1] and
nvidia-docker make this simple, but they are still not the norm.
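As a sketch, the Docker approach pins the whole stack (base image, drivers' userspace libs, framework versions) in one image. The tags and package versions below are illustrative, not taken from any particular project:

```dockerfile
# Hypothetical Dockerfile pinning a full DL environment (versions are illustrative)
FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu14.04
RUN apt-get update && apt-get install -y python-pip
RUN pip install tensorflow-gpu==0.12.1 keras==1.2.0
```

Built once, this image runs the same everywhere; with `nvidia-docker run -it <image>` the container also gets access to the host GPU, so collaborators skip the CUDA install entirely.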

- Reproducibility: This is a big one, as Denny mentions. Folks still use
GitHub for sharing code, but DL pipelines need versioning of more than just
code: the code, environment, parameters, data and results all need to be
captured together.
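To make that concrete, here is a minimal sketch of what "versioning more than code" could look like: a run manifest that records all five pieces in one place. This is a hypothetical illustration, not an existing tool; the field names and helper are made up for the example.

```python
# Sketch: record code version, environment, parameters, data and results
# together for one training run. All names here are illustrative.
import hashlib
import json
import platform
import sys


def data_fingerprint(data_bytes: bytes) -> str:
    """Hash the training data so the exact input can be identified later."""
    return hashlib.sha256(data_bytes).hexdigest()


def make_manifest(git_commit, params, data_bytes, results):
    return {
        "code": {"git_commit": git_commit},
        "environment": {
            "python": sys.version.split()[0],
            "platform": platform.platform(),
            # In practice you would also store `pip freeze` output here.
        },
        "parameters": params,
        "data": {"sha256": data_fingerprint(data_bytes)},
        "results": results,
    }


manifest = make_manifest(
    git_commit="abc123",  # placeholder commit id
    params={"lr": 0.01, "batch_size": 64},
    data_bytes=b"toy dataset stand-in",
    results={"val_accuracy": 0.91},
)
print(json.dumps(manifest, indent=2))
```

Committing a manifest like this next to the code would at least make it possible to tell whether two runs were actually comparable.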

- Sharing and collaboration: I've noticed that most collaboration on deep
learning research, unlike in software, happens only when the folks are
co-located (e.g. part of the same school or company). This likely links back
to reproducibility, but there are also not many good tools for effective
collaboration currently, IMHO.

[1] https://github.com/floydhub/dl-docker (Disclaimer: I created this)

