
Show HN: Perceptron – Artificial Neural Network Builder - wyliec
https://github.com/casparwylie/Perceptron
======
nerdponx
Isn't the name "perceptron" going to be a little confusing in this context?

~~~
msiggy
That was my first thought as well

~~~
wyliec
Why exactly?

~~~
nerdponx
[https://en.m.wikipedia.org/wiki/Multilayer_perceptron](https://en.m.wikipedia.org/wiki/Multilayer_perceptron)

~~~
wyliec
...Hence why I've called my "feedforward artificial neural network model that
maps sets of input data onto a set of appropriate outputs" builder
Perceptron...

~~~
nerdponx
This would be like calling a linear model builder "Regression"

~~~
wyliec
No need to take the name so literally though - it's rare that names are taken
literally at all!

------
neoncontrails
I respect the choice not to use a framework. I keep flirting with the idea of
posting some of my trained char-rnn models online, in hopes people find them
amusing and possibly share theirs in turn. Trained RNN models aren't small,
but they're not huge either - 20-30 MB, generally. In other words, it wouldn't
be prohibitive to accumulate a collection of trained, GNU-licensed models, if
the outputs would be useful and interesting to people.

Out of curiosity how are you serializing the output models? Are the files
memory-mapped bytecode, or does the training encode a nice, clean data
structure?

~~~
wyliec
When you say "output models", are you referring to the optimized ANNs, or to
the organisation of output neurons?

------
ivan_ah
Very nice idea.

What is the objective function (loss function) that `back_propagate` is
optimizing exactly? I'm new to the world of NN, so really excited to see NN
code that works from first principles. Reminds me of
[https://github.com/mnielsen/neural-networks-and-deep-learning](https://github.com/mnielsen/neural-networks-and-deep-learning)

Perhaps a little more detail about the math behind the code would be in order?
I'd be happy to help out.

Another cool direction would be to convert the python code to cython and see
how much faster it will run...

~~~
wyliec
Cheers for the feedback. The cost/error function is simply 1/2(a-y)^2 ...(I
think it's called the quadratic cost function). I wrote all the code myself (no
libs) so people like yourself can see all the steps required to train an ANN. I
was thinking of getting things working on the GPU, which should make even
Cython look slow. It all depends on how much attention the project gets. Right
now it still has quite a few bugs, and in general is unfinished.
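For reference, that quadratic cost and its derivative (the quantity
backpropagation starts from) can be sketched in a few lines of NumPy - an
illustration of the formula above, not code from the repo:

```python
import numpy as np

def quadratic_cost(a, y):
    """Quadratic cost C = 1/2 * sum((a - y)^2) over the output neurons."""
    return 0.5 * np.sum((a - y) ** 2)

def quadratic_cost_derivative(a, y):
    """dC/da = (a - y): the output-layer error that backprop propagates."""
    return a - y

a = np.array([0.8, 0.1])  # example network outputs
y = np.array([1.0, 0.0])  # targets
print(quadratic_cost(a, y))             # 0.5 * (0.04 + 0.01) = 0.025
print(quadratic_cost_derivative(a, y))  # approximately [-0.2, 0.1]
```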

------
partycoder
For didactic purposes there's also:

\- [http://www.ra.cs.uni-tuebingen.de/software/JavaNNS/](http://www.ra.cs.uni-tuebingen.de/software/JavaNNS/)

\-
[http://www.cs.waikato.ac.nz/ml/weka/](http://www.cs.waikato.ac.nz/ml/weka/)

~~~
wyliec
These are most likely more mature than Perceptron at the moment. More work to
do!

~~~
partycoder
Yes. But mature projects also have larger codebases and many knobs to play
with, which can make them harder to use as introductory material. Focused
projects have their purpose too... like what Minix is to Linux. So it's fine.

Other software for playing with neural networks includes R and Scilab, not to
mention Tensorflow, Torch, Theano, Pandas, Scikit-learn, Caffe and many
others. All of them are free, have great communities, and nice intro videos
can be found on YouTube.

Another way to learn is to just grab Python and NumPy and roll your own
multilayer perceptron with backpropagation, then play with other concepts.
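That roll-your-own suggestion can be sketched in plain NumPy. This is a
minimal, illustrative example - the layer sizes, learning rate, seed, and the
XOR task are all arbitrary choices for the demo, not anything from Perceptron
itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# XOR dataset: 4 samples, 2 features each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden (4 units)
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output
b2 = np.zeros((1, 1))

lr = 1.0
for epoch in range(10000):
    # forward pass
    z1 = X @ W1 + b1
    a1 = sigmoid(z1)
    z2 = a1 @ W2 + b2
    a2 = sigmoid(z2)

    # backward pass: quadratic cost C = 1/2 * sum((a2 - Y)^2), dC/da2 = a2 - Y
    delta2 = (a2 - Y) * sigmoid_prime(z2)          # output-layer error
    delta1 = (delta2 @ W2.T) * sigmoid_prime(z1)   # backpropagated hidden error

    # gradient-descent updates
    W2 -= lr * (a1.T @ delta2)
    b2 -= lr * delta2.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ delta1)
    b1 -= lr * delta1.sum(axis=0, keepdims=True)

print(a2.ravel())  # the network's outputs for the four XOR inputs
```

With enough epochs the outputs should move toward [0, 1, 1, 0], though
convergence depends on the random initialization.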

Jupyter notebooks are also a great way to explore what's out there.

------
rafael859
This looks great, but I think it would be even better if there were some way
of bridging it to existing libraries - say, for example, Keras. I don't know
how easy that would be, but I believe it would definitely be very useful. Even
without this capability, it looks like a great learning tool!

~~~
wyliec
Great to hear! The reason I don't use libraries is that I want the code
itself to be educational, so people can read it and see which algorithms are
used at every step. This project is less about competing on performance, and
more of an educational playground.

~~~
jamesmcintyre
Thanks so much for this! I'm just getting started with ML and deep learning,
so anything with visuals and "knobs"/"dials" giving real-time feedback -
visually showing changes and their effects - is a huge eye-opener!

~~~
wyliec
Really happy to hear it's living up to its purpose.

