
Why are RBMs so seldom used? What is their weak point? Based on Hinton's talks I thought they were a major breakthrough back then.



It was, back then. We don't even support them in our framework anymore. Most folks use VAEs and GANs for generative modeling now; in general, they're just more stable to train.
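For anyone who hasn't seen one: an RBM is just a bipartite energy-based model, usually trained with contrastive divergence rather than exact maximum likelihood. Here's a minimal Bernoulli-Bernoulli RBM with CD-1 in numpy (a toy sketch, all names and the toy data are mine, not from any framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with 1-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # positive phase: hidden probabilities and a binary sample
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # negative phase: one Gibbs step back down and up
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # CD-1 gradient approximation: <v h>_data - <v h>_recon
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        # reconstruction error: a rough progress proxy, not the true objective
        return np.mean((v0 - pv1) ** 2)

# toy data: 3 random bits, each duplicated, so there is structure to learn
bits = (rng.random((64, 3)) < 0.5).astype(float)
data = np.repeat(bits, 2, axis=1)

rbm = RBM(n_visible=6, n_hidden=3)
errs = [rbm.cd1_step(data) for _ in range(200)]
```

The instability people complain about is visible even here: the gradient is only an approximation (CD-1 instead of the true log-likelihood gradient), and the reconstruction error you can monitor isn't the quantity being optimized.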


What about DBNs? They're also generative models, so why are they rarely used now?

ML researchers follow trends: end-to-end CNNs for whatever problem, this LSTM, that GRU, whatever the latest architecture is. Much like web devs picking up and dropping JavaScript frameworks.


A DBN is essentially just stacked RBMs. Most folks now use stacked autoencoders for the exact same reasons you'd use a DBN.
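The stacking idea is the same in both cases: greedy layer-wise pretraining, where each layer trains on the previous layer's codes. A rough numpy sketch using tied-weight sigmoid autoencoders as the layers (my own toy implementation, not any particular library's API):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=500):
    """One sigmoid autoencoder layer with tied weights, squared-error loss."""
    n_vis = X.shape[1]
    W = rng.normal(0, 0.1, (n_vis, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_vis)
    for _ in range(epochs):
        H = sigmoid(X @ W + b_h)        # encode
        R = sigmoid(H @ W.T + b_v)      # decode with the transposed (tied) weights
        dR = (R - X) * R * (1 - R)      # dLoss/d(decoder pre-activation)
        dH = (dR @ W) * H * (1 - H)     # backprop through the encoder
        gW = X.T @ dH + dR.T @ H        # tied weights: encoder + decoder terms
        W -= lr * gW / len(X)
        b_h -= lr * dH.mean(axis=0)
        b_v -= lr * dR.mean(axis=0)
    return W, b_h, sigmoid(X @ W + b_h)

# greedy layer-wise stacking: layer 2 trains on layer 1's hidden codes,
# exactly the way a DBN stacks RBMs
bits = (rng.random((128, 4)) < 0.5).astype(float)
X = np.repeat(bits, 2, axis=1)               # 8 correlated visible units
W1, b1, H1 = train_autoencoder(X, n_hidden=4)
W2, b2, H2 = train_autoencoder(H1, n_hidden=2)
```

Swap `train_autoencoder` for CD-trained RBMs and you have the classic DBN pretraining recipe; the autoencoder version is just easier to optimize with plain backprop.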

Most "off the shelf" work people do is fairly simple transfer learning on an imagenet cnn.

That, or if they're doing generative modeling, most of the hype is focused on GANs now.
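The "transfer learning on an ImageNet CNN" recipe above usually amounts to: freeze the pretrained backbone, treat its penultimate-layer activations as features, and fit a small classifier on top. A sketch with a stand-in for the backbone (random class-dependent features replace the real CNN activations, so this runs without any pretrained weights):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a frozen pretrained backbone: in practice you'd push images
# through an ImageNet CNN minus its final layer; here, features with
# class-dependent means play that role.
n_classes, n_feat, n_per_class = 3, 16, 50
means = rng.normal(0, 2, (n_classes, n_feat))
X = np.vstack([means[c] + rng.normal(0, 1, (n_per_class, n_feat))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# The new "head": multinomial logistic regression by gradient descent
W = np.zeros((n_feat, n_classes))
b = np.zeros(n_classes)
Y = np.eye(n_classes)[y]                      # one-hot labels
for _ in range(300):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)             # softmax
    G = (P - Y) / len(X)                          # softmax cross-entropy gradient
    W -= 0.5 * (X.T @ G)
    b -= 0.5 * G.sum(axis=0)

acc = (np.argmax(X @ W + b, axis=1) == y).mean()
```

Only the tiny head is trained; the backbone's weights never change. That's why this works with small datasets and is the default first thing people try.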



