
Perceptron: the main component of neural networks - serdata
https://www.neuraldesigner.com/blog/perceptron-the-main-component-of-neural-networks
======
charles-salvia
The history of the perceptron is interesting. It was invented all the way back
in 1957 by Frank Rosenblatt, and seemed like a promising way to learn a
function. In fact, it was so promising that it was actually intended to be
implemented in hardware as a physical array of electronic perceptrons. But a
seminal 1969 book by Marvin Minsky and Seymour Papert showed that the
perceptron was fatally limited: it could only learn linearly separable
patterns, meaning it couldn't even learn the XOR function.
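To see the limitation concretely, here's a minimal sketch (my own illustration, not code from the article) of the classic perceptron learning rule. It converges on AND, which is linearly separable, but can never get XOR right no matter how long you train, because no single line separates the two XOR classes:

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Classic perceptron rule: w += lr * (target - prediction) * x."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, x[0] & x[1]) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

w, b = train_perceptron(AND)
and_preds = [predict(w, b, x) for x in inputs]
print(and_preds)  # [0, 0, 0, 1] -- converges, AND is linearly separable

w, b = train_perceptron(XOR)
xor_preds = [predict(w, b, x) for x in inputs]
print(xor_preds)  # never equals [0, 1, 1, 0]: the weights just oscillate
```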

So neural network research essentially came to a halt until more than a decade
later, when the idea of the multi-layer perceptron, which _can_ learn non-
linearly separable functions (and is now the foundation of all modern ANN
architectures), became more widely appreciated. (Actually, the original
critique had acknowledged that a multi-layer perceptron would overcome the
limitations of the single-layer model, but the blow to the original perceptron
was apparently so devastating that nobody cared.)
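The multi-layer fix is easy to demonstrate by hand, even without training: add one hidden layer and XOR decomposes into linearly separable pieces (XOR = OR AND NAND). The weights below are hand-picked for illustration, not learned:

```python
def step(z):
    """Threshold activation, as in the original perceptron."""
    return 1 if z > 0 else 0

def xor_mlp(x0, x1):
    h_or   = step(x0 + x1 - 0.5)      # hidden unit 1: OR of the inputs
    h_nand = step(-x0 - x1 + 1.5)     # hidden unit 2: NAND of the inputs
    return step(h_or + h_nand - 1.5)  # output unit: AND of the hidden units

xor_out = [xor_mlp(x0, x1) for x0, x1 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(xor_out)  # [0, 1, 1, 0]
```

Each individual unit is still just a linear threshold, but composing them lets the network carve out a non-linearly-separable region, which is exactly what the single-layer model cannot do.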

