
Perceptrons – the most basic form of a neural network - kawera
https://appliedgo.net/perceptron/
======
pessimizer
This is a question for anyone who knows more (or reads more) about neural
networks than me. My problem with the perceptron since I was a kid is that it
doesn't seem to model how neurons work very well at all. It creates a neuron
that exists in a timeless place; kind of a homo economicus for cognition.

My understanding of neurons: signals arrive at the dendrites at different
times; as soon as a signal arrives, its strength decays steadily, and when the
sum of the decaying signals plus a new arrival reaches a threshold, the axon
fires, dumping all of its energy and beginning the process again (creating the
largely irregular pops that everybody is used to hearing). Aside from that,
there are effects from other bodies and chemical concentrations between
neurons that are mostly unknown (which I feel fine leaving out as not central
to the process).

My question: is there any paper, hopefully comprehensible to the educated
layman, that explains how that process, which seems pretty easy to model, can
be mathematically reduced to perceptrons, which are just addition and a
condition?
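For what it's worth, the dynamics described here (inputs that decay over time, a threshold, fire-and-reset) are roughly what the classic leaky integrate-and-fire model captures. A minimal sketch in Python, with made-up decay and threshold values chosen purely for illustration:

```python
# A minimal leaky integrate-and-fire sketch of the process described above.
# The decay rate and threshold are made-up illustrative values, not
# biologically calibrated parameters.

def simulate_lif(inputs, decay=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires.

    inputs: one incoming signal strength per time step.
    """
    potential = 0.0
    spikes = []
    for t, signal in enumerate(inputs):
        potential = potential * decay + signal  # old charge decays, new signal adds
        if potential >= threshold:
            spikes.append(t)   # the axon fires...
            potential = 0.0    # ...dumping all accumulated energy
    return spikes

# A steady sub-threshold drip still fires once enough charge accumulates,
# producing a regular spike train; irregular inputs give irregular pops.
print(simulate_lif([0.3] * 20))  # → [3, 7, 11, 15, 19]
```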

~~~
danielmorozoff
First off, perceptrons are very simplified models, really a first pass at
modeling brains in silico from decades ago.

There is a whole field of theoretical neuroscience dedicated to this problem,
and it has made quite a bit of progress, though a bunch of work is still left
to be done.

The classical introductory text on the topic is the book by Peter Dayan and
Larry Abbott, Theoretical Neuroscience: Computational and Mathematical
Modeling of Neural Systems.

~~~
pessimizer
[deleted]

Thanks for the reference.

edit: Thanks very much; if that doesn't contain what I'm looking for, it'll
give me further places to look.

------
gavanwoolery
One thing the article failed to mention, which I found most interesting, is
that a perceptron can function as a NAND gate, which is a building block of
all other gates (AND, OR, XOR, etc.) and can thus replicate the functionality
of any logic circuit.
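A quick sketch of that in Python: weights of -2, -2 and a bias of 3 are one standard choice (not the only one) that makes a perceptron reproduce the NAND truth table, and NANDs then compose into any other gate, e.g. XOR:

```python
# A perceptron with hand-picked weights computing NAND. The values
# w1 = w2 = -2, bias = 3 are one standard choice; any weights with the
# same sign pattern and sufficient margin would work.

def perceptron(x1, x2, w1=-2, w2=-2, bias=3):
    # Fire (output 1) iff the weighted sum crosses zero.
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

def nand(a, b):
    return perceptron(a, b)

# NAND is functionally complete; for example, XOR from four NANDs:
def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

for a in (0, 1):
    for b in (0, 1):
        # nand column reads 1 1 1 0 down the rows; xor reads 0 1 1 0
        print(a, b, nand(a, b), xor(a, b))
```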

~~~
supercarrot
Another interesting thing is that a network of perceptron-style units with
just a single hidden layer can approximate any continuous function (the
universal approximation theorem).

[http://cstheory.stackexchange.com/questions/17545/universal-...](http://cstheory.stackexchange.com/questions/17545/universal-approximation-theorem-neural-networks)
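A hand-built illustration of the idea behind the theorem (not a trained network): pairs of steep sigmoid units form narrow "bumps", and a single hidden layer of such bumps, each scaled by the target's value on a grid, approximates a continuous function like x² on [0, 1]. The function name `make_approximator` and all parameters here are made up for the sketch:

```python
import math

def sigmoid(z):
    # Numerically stable logistic function (avoids overflow for large |z|).
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def make_approximator(f, n_bumps=50, steepness=2000.0):
    """One hidden layer of sigmoid units with hand-picked weights.

    Each pair of steep sigmoids forms a narrow 'bump' over one grid cell
    of [0, 1]; scaling each bump by f at the cell's center gives a
    piecewise-constant approximation of f. A real network would learn
    these weights instead of having them written in by hand.
    """
    width = 1.0 / n_bumps
    centers = [(i + 0.5) * width for i in range(n_bumps)]

    def approx(x):
        total = 0.0
        for c in centers:
            # Difference of two steep sigmoids ≈ indicator of the cell.
            bump = (sigmoid(steepness * (x - (c - width / 2)))
                    - sigmoid(steepness * (x - (c + width / 2))))
            total += f(c) * bump
        return total

    return approx

approx = make_approximator(lambda x: x * x)
for x in (0.25, 0.55, 0.75):
    print(x, approx(x))  # close to x**2 at each point
```

More bumps (a wider hidden layer) shrink the approximation error, which is the intuition the theorem makes precise.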

------
peternilson
I implemented the code from the tutorial in Python, for anyone who's
interested:

[https://github.com/petenilson/pyceptron](https://github.com/petenilson/pyceptron)

------
appleflaxen
I'm getting "refused to connect"

DownForEveryone says it's down too.

anyone have a mirror?

