
NanoNeuron – simple JavaScript functions that explain how machines learn - trekhleb
https://github.com/trekhleb/nano-neuron
======
kuczmama
This is fantastic. Probably the most approachable example I've seen.

~~~
trekhleb
Thanks! Glad that example was useful!

------
anonfunction
I like this a lot. Here's a very simple machine learning perceptron[1] example
in Golang (a minimal JavaScript sketch of the idea is below):

[https://github.com/montanaflynn/simple-go-perceptron](https://github.com/montanaflynn/simple-go-perceptron)

1. [https://en.wikipedia.org/wiki/Perceptron](https://en.wikipedia.org/wiki/Perceptron)
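
For anyone who doesn't want to click through, here is a minimal sketch of the
perceptron learning rule in plain JavaScript (illustrative names, not the
linked repo's code):

    // A perceptron is a weighted sum followed by a step function,
    // trained with the classic error-correction rule.
    function predict(weights, bias, inputs) {
      const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], bias);
      return sum >= 0 ? 1 : 0;
    }

    function train(samples, labels, epochs = 100, lr = 0.1) {
      let weights = new Array(samples[0].length).fill(0);
      let bias = 0;
      for (let e = 0; e < epochs; e += 1) {
        samples.forEach((inputs, i) => {
          const error = labels[i] - predict(weights, bias, inputs);
          // Nudge each weight in the direction that reduces the error.
          weights = weights.map((w, j) => w + lr * error * inputs[j]);
          bias += lr * error;
        });
      }
      return { weights, bias };
    }

    // Learn logical AND: only [1, 1] should map to 1.
    const { weights, bias } = train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1]);
    console.log(predict(weights, bias, [1, 1])); // 1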

------
cr0sh
@trekhleb: Can you modify the images used in the README.md? They seem to be
PNG files with a transparent background and the formula (or whatever is there)
in black; on GitHub in dark mode (or with Stylus or a similar CSS modifier for
a dark mode), the black text becomes unreadable. Thanks!

~~~
trekhleb
Sure. Done.

------
pumanoir
@trekhleb thanks for this write-up; it's fantastic. Are you planning a
similar explanation for neural nets?

~~~
trekhleb
It's possible, but I don't have firm plans yet.

------
greatgib
Awesome, and perfect for explaining machine learning to the uninitiated.
Really great work!

~~~
trekhleb
Thanks!

------
fhrufhfjrjf
Nice work, OP. There is some stuff at fast.ai that you might also enjoy. I
recently learned about Swift for TensorFlow from one of their videos; it
modifies Swift to allow placing attributes on functions and then having the
compiler automatically generate the derivatives of those functions.
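
Swift's compiler-level @differentiable attribute can't be reproduced in
JavaScript, but as a rough sketch of the same idea (automatic derivatives),
here is forward-mode autodiff with dual numbers:

    // Carry (value, derivative) pairs through every operation; the
    // derivative comes out "for free", with no symbolic math required.
    const dual = (v, d) => ({ v, d });
    const add = (a, b) => dual(a.v + b.v, a.d + b.d);
    const mul = (a, b) => dual(a.v * b.v, a.d * b.v + a.v * b.d); // product rule

    // f(x) = x*x + x, so f'(x) = 2x + 1
    const f = (x) => add(mul(x, x), x);
    console.log(f(dual(3, 1))); // { v: 12, d: 7 } -- f(3) = 12, f'(3) = 7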

------
rkagerer
Approachable, but the article could use a grammar check and a bit of
linguistic editing.

~~~
m00dy
I believe OP's mother tongue is not English, so I think it's fine.

~~~
diegoperini
Technical writing is hard. Kudos to the writer for trying hard.

------
imvetri
Is there any alternative model for neural networks without math in it?

~~~
madhadron
Sadly, no. The OP's write-up is just about the simplest exposition that
doesn't miss the point, and even it skips the major transition that happens
when you go to layers with nonlinearities between them.

There is no royal road to mathematics.
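
A quick sketch of why that transition matters (plain JavaScript, illustrative
numbers): stacked linear layers collapse into one linear function, while a
nonlinearity in between breaks the collapse.

    // Two linear "layers" composed with no nonlinearity...
    const layer1 = (x) => 2 * x + 1;   // w1 = 2, b1 = 1
    const layer2 = (x) => -3 * x + 5;  // w2 = -3, b2 = 5
    const stacked = (x) => layer2(layer1(x));
    // ...are exactly one linear function: (w2*w1)x + (w2*b1 + b2) = -6x + 2.
    console.log(stacked(4), -6 * 4 + 2); // -22 -22

    // Insert a nonlinearity (here ReLU) and the result is piecewise,
    // no longer expressible as a single w*x + b.
    const relu = (x) => Math.max(0, x);
    const net = (x) => layer2(relu(layer1(x)));
    console.log(net(-1), net(0), net(1)); // 5 2 -4 (not on one line)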

~~~
cr0sh
> The OP's write-up is just about the simplest exposition that doesn't miss
> the point, and even it skips the major transition that happens when you go
> to layers with nonlinearities between them.

Well - the activation function appears to be linear, and thus close to ReLU -
so (maybe?) there wouldn't be any issue with the so-called "vanishing
gradients" problem? But I'm just a hobbyist who dabbles in this topic, so my
opinion doesn't carry much weight...

~~~
madhadron
There is no activation function. It's linear regression.
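
A condensed sketch of what the write-up's model amounts to: fit y = w*x + b
by gradient descent (variable names here are illustrative, not the repo's):

    // Fit y = w*x + b to Celsius -> Fahrenheit data (y = 1.8x + 32)
    // by gradient descent on the mean squared error.
    let w = 0;
    let b = 0;
    const alpha = 0.0005; // learning rate
    const xs = [...Array(100).keys()];
    const ys = xs.map((x) => 1.8 * x + 32);

    for (let epoch = 0; epoch < 70000; epoch += 1) {
      let dW = 0;
      let dB = 0;
      xs.forEach((x, i) => {
        const error = w * x + b - ys[i];
        dW += error * x; // d(cost)/dw
        dB += error;     // d(cost)/db
      });
      w -= (alpha * dW) / xs.length;
      b -= (alpha * dB) / xs.length;
    }
    console.log(w.toFixed(2), b.toFixed(2)); // ~1.80 ~32.00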

~~~
derangedHorse
To be fair, there are also no layers in linear regression.

