
A Non-Mathematical Introduction to Using Neural Networks - Anon84
http://www.heatonresearch.com/content/non-mathematical-introduction-using-neural-networks
======
sqrt17
Hmm... how should I put it?

I don't see why anyone non-mathematical would want to use neural networks in
the first place. There are lots of machine learning tools designed to work out
of the box (decision trees, SVMs), whereas neural networks give you greater
flexibility but make it radically easier to shoot yourself in the foot if you
don't understand the whole thing. And "the whole thing" involves a gobload of
math, including multivariable calculus, bits of probability theory, and
numerical optimization.

That said, it's probably ok if you just want to get your feet wet and see what
it's all about. There are even decent libraries for Python:
<http://deeplearning.net/software/theano/> <http://code.google.com/p/pynnet/>

Matching tutorial: <http://deeplearning.net/tutorial/>

~~~
retube
I don't think you need to know all the math behind NNs in order to learn how
to use one successfully. I certainly don't, and I've been using the encog
library (by Heaton) for a while with great success. And you definitely do
_not_ need to be a mathematician to use one.

Yes, there are some general principles and concepts you need to
know/understand, but nothing particularly deep.

Edit: Actually, I'd highly recommend encog: it's very easy to use, it's
extremely well documented, and for those of us who do not have a formal
education in NNs it's a great place to get started.
<http://www.heatonresearch.com/encog>

~~~
brent
I'm not sure you've actually addressed his concern. I think the ability to use
one (in an engineering sense) and get seemingly good results is separate from
the issue of fully utilizing them. You are likely better off using a much more
hands-off tool that requires little knowledge of the underlying mechanism (to
his point, decision trees and SVMs).

------
jules
The mathematical explanation is surprisingly simple. A 2-layer neural network
consists of two matrices A and B and an activation function f. The network
computes the following function, where in and out are vectors:

    out = f(A*f(B*in))

where f is applied to each element of a vector. You can see how this
generalizes to more layers.
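The formula above can be sketched in a few lines of plain Python. The weights below are made up purely for illustration, and the logistic function is just one common choice for f; real libraries use optimized linear algebra:

```python
import math

def logistic(v):
    # a common choice for the activation f, applied element-wise
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def matvec(M, v):
    # plain matrix-vector product
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def forward(A, B, x):
    # the 2-layer network: out = f(A*f(B*in))
    return logistic(matvec(A, logistic(matvec(B, x))))

# made-up weights: 2 inputs -> 3 hidden units -> 1 output
B = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]
A = [[0.5, -0.3, 0.2]]
print(forward(A, B, [1.0, 0.0]))
```

Adding a layer is just one more matrix and one more application of f, which is the generalization to more layers.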

Neural network learning algorithms are given (in, out) pairs and try to find A
and B that minimize the mean squared error, for example by using stochastic
gradient descent.
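To make that learning step concrete, here is a hedged sketch: it takes gradient steps on the squared error one (in, out) pair at a time, but estimates the gradient by finite differences purely to keep the code short -- real implementations use backpropagation. The OR-style dataset, the network shape, and all the constants are illustrative assumptions (the net has no bias terms, so the fit is rough -- fine for a sketch):

```python
import math
import random

def logistic(v):
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def forward(A, B, x):
    # out = f(A*f(B*in))
    return logistic(matvec(A, logistic(matvec(B, x))))

def mse(A, B, data):
    # mean squared error over the (in, out) pairs
    total = 0.0
    for x, target in data:
        out = forward(A, B, x)
        total += sum((o - t) ** 2 for o, t in zip(out, target))
    return total / len(data)

def sgd_step(A, B, x, target, lr=0.1, h=1e-4):
    # one stochastic gradient step on a single (in, out) pair; the
    # gradient is estimated by central finite differences to keep this
    # sketch short -- real libraries compute it by backpropagation
    def loss():
        return sum((o - t) ** 2 for o, t in zip(forward(A, B, x), target))
    grads = []
    for M in (A, B):
        g = [[0.0] * len(row) for row in M]
        for i, row in enumerate(M):
            for j in range(len(row)):
                old = row[j]
                row[j] = old + h
                up = loss()
                row[j] = old - h
                down = loss()
                row[j] = old
                g[i][j] = (up - down) / (2 * h)
        grads.append(g)
    for M, g in zip((A, B), grads):
        for i, row in enumerate(M):
            for j in range(len(row)):
                row[j] -= lr * g[i][j]

random.seed(0)
# illustrative task: learn an OR-like mapping (target is x1 OR x2), 2 -> 2 -> 1
data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [1.0])]
B = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
A = [[random.uniform(-1, 1) for _ in range(2)]]
before = mse(A, B, data)
for epoch in range(200):
    for x, t in data:
        sgd_step(A, B, x, t)
after = mse(A, B, data)
print(before, after)
```

The error after training should be lower than at the random starting point, which is all "learning" means here.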

~~~
gtt
Could you provide a link to a mathematically sound introduction to neural
networks?

I was always turned off because they were always explained in some magical
manner...

~~~
dododo
if f is the logistic function, then neural networks basically correspond to
logistic regression.

<http://en.wikipedia.org/wiki/Logistic_regression>

which has been around since the 1940s... (multilayer neural networks
correspond to hierarchical logistic regression--just plug them together). this
should be in any reasonable stats book.
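That correspondence is easy to check directly: a single "neuron" with logistic activation computes exactly the logistic regression model, with the bias folded in as a weight on a constant input. The parameters below are arbitrary numbers chosen for illustration:

```python
import math

def sigmoid(z):
    # the logistic function
    return 1.0 / (1.0 + math.exp(-z))

def logistic_regression(w, b, x):
    # the textbook model: P(y=1|x) = sigmoid(w.x + b)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def one_unit_net(W, x):
    # one "neuron" with logistic activation; the bias b is
    # folded in as a weight on a constant 1 input
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x + [1.0])))

w, b = [0.7, -1.2], 0.3   # arbitrary illustrative parameters
x = [2.0, 1.0]
print(logistic_regression(w, b, x), one_unit_net(w + [b], x))
```

The two functions agree for any input, which is the sense in which a one-layer net "is" logistic regression.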

if you mean mathematically sound in terms of learning, etc., then Chris
Bishop's "Neural Networks for Pattern Recognition" is pretty good, full of
sage advice and justification.

------
yummyfajitas
This is why neural networks were very popular for a while - they were sold to
the public in a completely non-mathematical way.

~~~
psyklic
The fact they are called "neural networks" is testament to this :)

