

A Beginner’s Guide to Restricted Boltzmann Machines - vonnik
http://deeplearning4j.org/restrictedboltzmannmachine.html

======
saosebastiao
Asking here because I don't really know of a better place. Does anybody
involved in deep learning have any use cases _other than image-based use
cases_ where deep learning has been a measurably better option than other
techniques?

I don't do any image-related tasks, but I use machine learning extensively and
have never been able to successfully break away from my big three of Naive
Bayes, SVMs, and Random Forests. I've tried some of the more established deep
learning techniques (LeNet, RBM, etc.), but can't seem to find anything that
works better than one of those three. Is deep learning an exclusively
image-related technique?

~~~
teraflop
You can think of deep learning as a way to automate feature extraction on very
high-dimensional data, where the features are complex enough that they're
difficult or impossible to specify by hand. This includes things like:

- objects in an image/video

- phonemes in raw audio

- events in multidimensional time series

- grammatical/semantic structures in unstructured text

- strategic features in a board game position (e.g. Go)

These kinds of inputs are difficult to handle with traditional ML techniques.
But if your data already looks more like rows in a table, with simple,
semantically meaningful features, deep learning isn't likely to buy you that
much.

------
benanne
I feel like this guide comes about 5 years too late - RBMs as density models
have been shown to be relatively weak, except in the case of binary data. For
continuous data, you can often do better even with a simple Gaussian mixture
model. Other than that, they are cumbersome to train (the gradient is
intractable and has to be approximated), and for the continuous variants
training can be unstable unless you use very low learning rates.
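To make the "gradient needs to be approximated" point concrete, the standard trick is contrastive divergence (CD-1), which replaces the intractable model expectation with a single Gibbs step. A toy numpy sketch for a binary RBM; the layer sizes, learning rate, and training pattern below are arbitrary illustrations, not anything from the linked guide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary RBM: 6 visible units, 4 hidden units (sizes are arbitrary).
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One CD-1 step: approximate the log-likelihood gradient using a
    single Gibbs step instead of sampling from the model distribution."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < ph0).astype(float)
    # Negative phase: one Gibbs step back to the visibles and up again.
    pv1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_visible) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # Update from the difference between the two phases.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b_v += lr * (v0 - v1)
    b_h += lr * (ph0 - ph1)

# Train on a single repeated binary pattern.
pattern = np.array([1, 1, 0, 0, 1, 0], dtype=float)
for _ in range(500):
    cd1_update(pattern)

# After training, the reconstruction probabilities should resemble the pattern.
recon = sigmoid(sigmoid(pattern @ W + b_h) @ W.T + b_v)
print(np.round(recon, 2))
```

Note that CD-1 follows a biased estimate of the gradient, which is part of why getting RBMs to train well takes so much fiddling.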

They were very popular for unsupervised pre-training a while ago, but the
utility of pre-training has greatly diminished. Unless you have a ton of
unlabeled data and very few labels, it's not worth the effort. And if it is,
you are better off using autoencoders for pre-training anyway. They are
conceptually much simpler and easier to understand, and you'll get roughly the
same results.
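For comparison, a bare-bones autoencoder really is just a feed-forward net trained to reproduce its own input with an exact (not approximated) gradient. A toy numpy sketch; the sizes, learning rate, and synthetic data are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autoencoder: compress 8-dim inputs to a 3-dim code and reconstruct,
# trained by plain SGD on squared reconstruction error.
n_in, n_code = 8, 3
W1 = rng.normal(0, 0.1, (n_in, n_code))   # encoder weights
W2 = rng.normal(0, 0.1, (n_code, n_in))   # decoder weights

def step(x, lr=0.05):
    global W1, W2
    code = np.tanh(x @ W1)   # encoder
    recon = code @ W2        # linear decoder
    err = recon - x          # gradient of 0.5 * squared error w.r.t. recon
    # Backpropagate through both layers (compute dcode before updating W2).
    dcode = (err @ W2.T) * (1 - code ** 2)
    W2 -= lr * np.outer(code, err)
    W1 -= lr * np.outer(x, dcode)
    return np.mean(err ** 2)

# Synthetic data that is roughly 3-dimensional, so a 3-unit code suffices.
X = rng.normal(size=(50, n_in))
X[:, 3:] = X[:, :3] @ rng.normal(size=(3, n_in - 3))

losses = [np.mean([step(x) for x in X]) for _ in range(200)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The learned code layer is what you would use to initialize a supervised net when pre-training, in place of the RBM's hidden layer.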

If you want to get started with deep learning, focus on feed-forward and
recurrent neural nets instead; you'll get much more useful knowledge out of
that. For most of the common deep learning use cases there is no need to
bother with RBMs anymore.

------
deepnet
The recent work at Berkeley using deep nets for domestic robot control is
interesting.

Deep nets turn camera pixels into motor torques.

The robots are quickly trained on a new task, and once trained, the solutions
are robust to changes - no camera calibration is required.

[http://rll.berkeley.edu/deeplearningrobotics/](http://rll.berkeley.edu/deeplearningrobotics/)

