How to implement a neural network: Part 1 (peterroelants.github.io)
66 points by dil8 on June 13, 2015 | 10 comments



I'm doing the Coursera machine learning course; you can try it too: https://www.coursera.org/learn/machine-learning . It's very relevant to this topic, and the explanations there are good as well.


AFAIU, it's a tutorial on old neural networks, not on deep learning.


What do you mean by old neural networks?


You know - the ones you had to start with a crank.


You made my day :)


I mean the neural network algorithms that were used prior to the discovery of deep learning methods.


The algorithms haven't (fundamentally) changed.


They have changed; otherwise we wouldn't have had a breakthrough.


Barely at all. Long Short-Term Memory network blocks, for example, are basically perceptrons with feedback controlled by two or three completely conventional non-linear nets running off the same inputs.
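
Roughly what that looks like in code: a minimal NumPy sketch of a single LSTM step (my own illustration, not from the article; the weight and bias names W and b are placeholders). Each gate is just a small conventional layer over the same [h_prev, x] input, and the gates control the feedback on the cell state.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_cell_step(x, h_prev, c_prev, W, b):
        # Four small dense layers, all running off the same concatenated input.
        z = np.concatenate([h_prev, x])
        f = sigmoid(W['f'] @ z + b['f'])   # forget gate
        i = sigmoid(W['i'] @ z + b['i'])   # input gate
        o = sigmoid(W['o'] @ z + b['o'])   # output gate
        g = np.tanh(W['g'] @ z + b['g'])   # candidate update
        c = f * c_prev + i * g             # gated feedback on the cell state
        h = o * np.tanh(c)                 # gated output
        return h, c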

Deep learning networks are an assembly of existing connectionist parts, enabled by Moore's Law. There's nothing bad about this: the progress being made is real and substantial, but it's being made in ways that are true to Edison's prescribed ratio of inspiration and perspiration, although a lot of the perspiring is being done by machines.

The architectural tweaks, while small, are still very important. But the core learning algorithm is still backprop deployed over the new architectures.


I agree. I've seen someone go as far as calling DNNs "just a big backpropagated multilayer perceptron, with absurd amounts of neurons and layers, enabled only by the processing power of GPUs". It's far too simplified, but I can see the point behind the argument.
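
To make the "big backpropagated multilayer perceptron" point concrete, here is a minimal sketch of a two-layer net trained with plain backpropagation (toy data and sizes, my own illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 64 samples, 3 features, binary targets.
    X = rng.normal(size=(64, 3))
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0
    W1 = rng.normal(size=(3, 8)) * 0.5
    W2 = rng.normal(size=(8, 1)) * 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(500):
        # Forward pass through the two layers.
        h = sigmoid(X @ W1)
        p = sigmoid(h @ W2)
        # Backward pass: chain rule applied layer by layer (sigmoid + cross-entropy).
        d_out = (p - y) / len(X)
        d_W2 = h.T @ d_out
        d_h = (d_out @ W2.T) * h * (1 - h)
        d_W1 = X.T @ d_h
        # Plain gradient descent update.
        W2 -= 0.5 * d_W2
        W1 -= 0.5 * d_W1

In this framing, a deep net is the same loop with more layers (and GPUs to make it fast); additions like convolutions or LSTM gates change the forward pass, not the training principle.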




