The other responses to your question tell only part of the story. Yes, the academic groups working in deep learning publish papers describing their methods. But those papers are rarely sufficient on their own to recreate the models they built.
There's a lot of other knowledge/expertise/intuition that's required to make working implementations. There have been some deep learning tutorials at recent conferences that might be more in-depth. (See my previous comment [1] for details.)
Another good way to learn is to look at open source implementations, such as Caffe from Berkeley [2] or OverFeat from NYU [3].
In addition to showing how to choose architectures and set hyperparameters, they also contain tricks for speeding things up. This matters a lot in practice: the right tricks can make an orders-of-magnitude difference (training in hours instead of days).
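A concrete example of the kind of trick I mean: Caffe famously implements convolution by unrolling image patches into a matrix ("im2col") and then doing one big matrix multiply, which lets a tuned BLAS do the heavy lifting instead of nested Python/C loops. Here's a rough numpy sketch of the idea (my own illustration, not Caffe's actual code, and ignoring stride/padding/channels):

    import numpy as np

    def conv2d_naive(image, kernel):
        # Direct sliding-window cross-correlation with explicit loops -- slow.
        H, W = image.shape
        kH, kW = kernel.shape
        out = np.zeros((H - kH + 1, W - kW + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
        return out

    def conv2d_im2col(image, kernel):
        # Unroll every kH x kW patch into a column, then do one matmul -- fast.
        H, W = image.shape
        kH, kW = kernel.shape
        oH, oW = H - kH + 1, W - kW + 1
        cols = np.empty((kH * kW, oH * oW))
        row = 0
        for i in range(kH):
            for j in range(kW):
                cols[row] = image[i:i + oH, j:j + oW].ravel()
                row += 1
        # One BLAS-backed matrix product replaces the per-pixel Python loops.
        return (kernel.ravel() @ cols).reshape(oH, oW)

Both functions compute the same output, but the im2col version spends almost all its time inside optimized linear algebra, which is exactly the kind of detail the papers gloss over and the codebases get right.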
[1] https://news.ycombinator.com/item?id=7742192
[2] http://caffe.berkeleyvision.org/
[3] https://github.com/sermanet/OverFeat