
How to build a brain – An introduction to neurophysiology for engineers - juliuskunze
http://juliuskunze.com/how-to-build-a-brain
======
sigi45
I can recommend talks from Joscha who is trying to build an AI based on the
idea how our brain works. Unbelievable great talks.

[https://media.ccc.de/v/31c3_-_6573_-_en_-_saal_2_-_201412281...](https://media.ccc.de/v/31c3_-_6573_-_en_-_saal_2_-_201412281130_-_from_computation_to_consciousness_-_joscha)
[https://media.ccc.de/v/32c3-7483-computational_meta-psycholo...](https://media.ccc.de/v/32c3-7483-computational_meta-psychology)
[https://media.ccc.de/v/33c3-8369-machine_dreams](https://media.ccc.de/v/33c3-8369-machine_dreams)

31C3, 32C3, and 33C3 are the yearly conferences of the German hacker group CCC (Chaos Computer Club), held at the end of December.

2017's edition was 34C3, held in Leipzig. (Try a Google image search to get a feel for what I'm talking about:
[https://www.google.de/search?q=34c3&source=lnms&tbm=isch](https://www.google.de/search?q=34c3&source=lnms&tbm=isch))

All talks are free to watch (there are a lot more on media.ccc.de).

~~~
Seanny123
To be clear, this article is about neural computation, while Joscha has chosen
to ignore the constraints imposed by this computational paradigm.

~~~
sigi45
Yes, but others have asked how the brain works at the system level.

------
subroutine
I think that (speaking as a neuroscientist) one important physiological aspect
this intro glosses over, and one that would be relevant to any of you machine
learning gurus, is how neurons manage their synaptic weights. Most artificial
neural network models I've seen use a partial-derivative-based backprop
algorithm to update 'synaptic' weights. In neurons, these weights (in the case
of fast excitatory transmission) are proportional to the number of glutamate
receptors currently in a given synapse. During learning (let's say associative
learning, where I associate the sound of your voice with the visual info about
your face), something happens to increase the number of these receptors at key
synapses. That way, the next time the upstream neuron fires, it more easily
elicits activation from the downstream neuron, because there are more
receptors at those synapses. So the same quanta of neurotransmitter will be
released at all the terminals of the neurons activated by the unique auditory
signature of your voice, but since the downstream synapses carrying the visual
info of your face have upregulated their receptor numbers, those neurons are
going to be activated easily. (Thus when you call me on the phone, your voice
will readily activate those visual neurons for your face, and I remember what
it looks like.)
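
In code, the contrast looks something like this (a toy sketch; the receptor
counts, per-receptor contribution, and number of receptors captured are all
made up for illustration, not physiological values):

```python
# Backprop-style update: the weight is a free real number nudged
# along a gradient (learning rate and gradient are arbitrary here).
w = 0.5
grad = -0.2            # hypothetical dLoss/dw
w -= 0.1 * grad        # w is now 0.52

# Receptor-count view: the effective weight is proportional to how
# many glutamate receptors currently sit in the synapse.
receptors = 50                   # baseline count (made up)
unit_epsp = 0.01                 # contribution per receptor (made up)
weight = receptors * unit_epsp   # 0.5

# An associative learning event captures extra receptors at this
# synapse, so the same presynaptic release now has a bigger effect.
receptors += 30
weight = receptors * unit_epsp   # 0.8
print(weight)
```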

If you are interested in seeing how the postsynaptic neuron manages its
synaptic receptor numbers (and the intricate process the backprop algo is
cheating), you might like to check out a 3D simulation I've made to model this
process.

Animation:
[https://www.youtube.com/embed/6ZNnBGgea0Y](https://www.youtube.com/embed/6ZNnBGgea0Y)
Code on github:
[https://github.com/subroutines/plasticity](https://github.com/subroutines/plasticity)

(note the animation is not too exciting, so you might want to skip forward a
few times)

But basically you will see that neurons rely on stochastic surface diffusion
of receptors to deliver new receptors to synapses. The takeaway is that during
rest (or whatever you want to call it... baseline, non-learning, etc) synapses
will reach some steady-state number of receptors. If the synapse undergoes
some learning event, it needs to 'capture' more receptors to increase the
steady-state amount. It does this by modifications to proteins just below the
surface of the postsynaptic membrane that act as velcro to surface receptors
floating by. Thus synapses take control of their synaptic weights by managing
the surface diffusion rate of excitatory receptors.
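
If you'd rather poke at the idea than watch the animation, here's a minimal
birth-death sketch of that steady state (all rates invented): receptors are
captured at a fixed rate, each captured receptor escapes with some probability
per step, and a 'learning event' is modeled as stickier velcro, i.e. a lower
escape probability, which roughly doubles the steady-state count.

```python
import random

def simulate(steps, p_capture, p_escape, n=0):
    """One synapse: each step a passing receptor may be captured
    (prob p_capture), and each captured receptor may escape (prob
    p_escape). The count settles near p_capture / p_escape."""
    for _ in range(steps):
        if random.random() < p_capture:
            n += 1
        n -= sum(random.random() < p_escape for _ in range(n))
    return n

random.seed(0)
baseline = simulate(5000, p_capture=0.5, p_escape=0.01)      # ~50
potentiated = simulate(5000, p_capture=0.5, p_escape=0.005)  # ~100
print(baseline, potentiated)
```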

~~~
nabla9
Thank you for the interesting comment.

Can you give me a rough time scale for how quickly neurons learn (synaptic
plasticity)? I have a recollection that the time scale is on the order of tens
of milliseconds, but I might be wrong.

~~~
subroutine
As @radicalOH mentions, there are several timescales depending on whether you
are talking about immediate, short-term, or long-term synaptic potentiation,
each coordinated by a cascade of events (which have been extensively
documented, if you're interested in knowing more).

To specifically address your question about how quickly learning happens in
neurons... it's fast. Is it ~10 ms? Mmm, that seems incredibly fast, and I'm
not sure I can provide any definitive answer. I think it heavily depends on
what you're willing to interpret as 'learning' when examining events on the
molecular level. There are events that engender near-immediate alterations to
ongoing transmission, and other events supporting lasting electrical changes
that (necessarily?) develop across longer timescales. You might think of this
distinction as the difference between being able to hold a phone number in
consciousness by repeating it over and over, and actually being able to recall
that phone number 5 minutes later. In the former case, something internal
certainly must have changed, because a moment ago you weren't repeating this
number in your head, and now you are. On the other hand, can we really say it
was learning if the number is immediately lost the moment it leaves
consciousness? Anyway, I'm not going to wax philosophical on that. I'll just
show you some data for timescales we think are sufficient to produce lasting
increases in synaptic efficacy...

Here is a video showing a single spine just after 2-photon 'glutamate
uncaging':

[https://goo.gl/rfgjrW](https://goo.gl/rfgjrW)

(as I mentioned above, glutamate receptors are the primary receptors involved
in fast-excitatory transmission; so uncaging tons of glutamate right next to a
spine is going to evoke immediate plasticity changes - particularly under
these experimental conditions)

Note that in just a few seconds the spine has doubled in size. Morphological
changes in the spine are probably not the first events that alter synaptic
transmission, but they are a reliable sign of a lasting memory trace.

Here is a whole-cell recording of the electrical response in a neuron to
incoming signals, before and after stimulation like that above ('EPSC' stands
for 'excitatory postsynaptic current'):

[https://i.imgur.com/0OTMwH5.png](https://i.imgur.com/0OTMwH5.png)

------
posterboy
> Note that this means that for transmission over short distances, we are not
> constrained to all-or-nothing encoding and in fact amplitude-based encoding
> is preferable due to higher data rate and energy efficiency. This explains
> why short neurons never use spikes in the brain. (p. 36)

Fast high-bandwidth interconnects use this too, in optical fibre modulation
for telecom, and not just plain amplitude modulation either. I guess the
interconnects between the processing nodes of a supercomputer work in a
similar manner.
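
Back-of-the-envelope for why graded amplitudes win on raw data rate (the
symbol rate and level counts below are invented for illustration): a symbol
with M distinguishable levels carries log2(M) bits, so at the same symbol rate
a graded channel beats binary spike/no-spike signalling.

```python
import math

symbol_rate_hz = 1000        # hypothetical symbols per second
for levels in (2, 4, 16):    # 2 = all-or-nothing, the rest graded
    bits = math.log2(levels)
    print(f"{levels:>2} levels: {symbol_rate_hz * bits:.0f} bit/s")
```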

I would really like to know how brain waves figure into this. Does the brain
do "wifi" between distant nodes?

> How do brains information?

You accidentally a word! Not just there, by the way.

------
dschuetz
While it's just an introduction, what I am actually missing is a proper
introduction to the neural processing model. How it works in the physical
brain, with potentials and a couple of different neurotransmitters, is a whole
different matter!

By using just the model of neural processing, one can achieve amazing results.
Whether a neuron activates its output links depends on the weighted sum of the
activated input links: if the sum is strongly positive, the neuron activates
its output; if it's strongly negative, the neuron even inhibits activation of
successor neurons. But even that basic tidbit is missing from the text.
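
A minimal sketch of that rule (thresholds and weights here are arbitrary
illustrations, not anything from the article):

```python
def neuron_output(inputs, weights, threshold=0.5):
    """Activate when the weighted sum of active inputs is strongly
    positive; treat a strongly negative sum as inhibiting successors."""
    s = sum(x * w for x, w in zip(inputs, weights))
    if s >= threshold:
        return 1.0    # excite successor neurons
    if s <= -threshold:
        return -1.0   # inhibit successor neurons
    return 0.0        # stay silent

print(neuron_output([1, 1, 0], [0.4, 0.3, -0.9]))  # 1.0 (excitation)
print(neuron_output([0, 1, 1], [0.4, 0.3, -0.9]))  # -1.0 (inhibition)
```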

"The neuron adds inputs in some way..." Really? Oh man. I consider myself an
engineer, but that article doesn't say anything about _how_ neurons and neural
nets specifically work.

~~~
swebs
Not sure exactly how biological neurons work, but artificial neurons are just
a weight matrix and some type of sigmoid function. Pretty much all a neuron
does is perform logistic regression.
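
Written out as a single unit, the equivalence is obvious (toy weights, nothing
more):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b):
    """sigmoid(w . x + b): exactly the logistic regression model
    for P(y = 1 | x)."""
    return sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)

print(neuron([1.0, 0.5], [2.0, -1.0], b=-0.5))  # ~0.73
```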

------
briga
The thing with brains is that even if you understand all the building blocks,
that really doesn't tell you much about how the brain actually works. Trying
to understand the brain from neurons is like trying to understand Microsoft
Word by looking at its machine code. Of course the brain has the added
complexity that it isn't designed by humans, and so it's difficult to pinpoint
where one region ends and another begins.

~~~
topmonk
I disagree that this is the reason we haven't understood the brain. If we
could replicate the underlying process, we could attempt to simulate a brain,
maybe even successfully, without understanding why it works.

The main problem seems to be that we actually _don't_ understand all the
building blocks, at least according to the article:

> There are fundamental questions left unanswered: What information do neurons
> represent? How do neurons connect to achieve that? How are these
> representations learned?

Until we figure out how and why connections are created between neurons, we
only know how the brain responds to stimuli it has already learned, but not
how it learns new information.

------
John_KZ
This is really incomplete and written in a way that's not constructive in any
way. This is not why things are the way they are.

------
marmaduke
This predictably omits large-scale features such as boundary conditions and
the resulting pattern formation.

Think of a guitar string: what matters for tuning it is the tension more than
the composition. In the same way, standing waves in brain tissue reflect
geometry and connectivity as much as membrane potentials.
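
To make the guitar-string point concrete: the fundamental of an ideal string
is f = sqrt(T / mu) / (2L), so retuning is almost entirely a tension knob. The
numbers below are rough guitar-ish guesses, not measurements.

```python
import math

def fundamental_hz(length_m, tension_n, linear_density_kg_m):
    """Ideal-string fundamental: sqrt(T / mu) / (2 * L)."""
    return math.sqrt(tension_n / linear_density_kg_m) / (2 * length_m)

L, mu = 0.65, 0.005          # 65 cm scale, 5 g/m string (guesses)
for T in (60, 80, 100):      # retune by tension alone
    print(f"{T} N -> {fundamental_hz(L, T, mu):.1f} Hz")
```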

------
bra-ket
Relevant: "How to Build a A Brain" by Chris Eliasmith
[https://www.amazon.com/How-Build-Brain-Architecture-
Architec...](https://www.amazon.com/How-Build-Brain-Architecture-
Architectures/dp/0190262125)

~~~
eli_gottlieb
Also relevant: _Principles of Neural Design_ by Sterling and Laughlin
[https://mitpress.mit.edu/neuraldesign](https://mitpress.mit.edu/neuraldesign)

~~~
idrios
Also relevant, though maybe deviating a bit from engineering: Principles of
Neural Science by Kandel and Schwartz
[https://www.amazon.com/Principles-Neural-Science-Fifth-Kande...](https://www.amazon.com/Principles-Neural-Science-Fifth-Kandel/dp/0071390111)

------
Seanny123
> There are fundamental questions left unanswered: What information do neurons
> represent? How do neurons connect to achieve that? How are these
> representations learned? Neuroscience offers partial answers and algorithmic
> ideas, but we are far from a complete theory conclusively backed by
> observations.

Calling the answers "partial" is fair, but I feel like the author understates
the number of observations backing typical neural models. The lab I belong to
created Spaun, which admittedly uses a pretty simple neuron model, but it
still matched a ton of neural and behavioural data! There's also Leabra, which
I have some qualms with, but which still has a pretty large collection of
results.

------
golergka
> In the brain, slow conductors make it hard to have a synchronized clock
> signal. This rules out digital coding,

But clockless processors exist, although they are not particularly popular.

~~~
posterboy
I don't think "clockless" is apt. Asynchronous, sure, but the gates are still,
well... gated, as far as I can tell, e.g. in the ARM Amulet research. There's
no central clock, yes, but it's still time-discrete, I guess.

------
jlizzle30
> "...slow speed of the sodium pump recovering the resting potential..."

Any neuroscientists out there know if cycles (from graph theory) are possible
in the brain?

Put another way, would the signal generated by neuron1 be able to travel
around a circuit of other neurons and come back to activate neuron1 again? Or
is this not possible because neuron1 would still be recovering?

I've wondered this for a while and would love it if someone had the answer :)
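
The way I'm picturing it (all constants invented for illustration), it comes
down to whether the conduction time around the loop exceeds the refractory
period:

```python
N = 5            # neurons in the cycle
delay = 2        # time steps for a spike to cross one synapse (made up)
refractory = 6   # steps a neuron stays unexcitable after firing (made up)

loop_time = N * delay          # 10 steps for the signal to come back
print(loop_time > refractory)  # True: neuron1 has recovered in time
# A tighter loop (N = 2 -> loop_time = 4 < 6) would arrive while
# neuron1 is still refractory, and the circulating signal dies out.
```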

