
Artificial Neural Nets Grow Brainlike Navigation Cells - digital55
https://www.quantamagazine.org/artificial-neural-nets-grow-brainlike-navigation-cells-20180509/
======
jimfleming
To draw too many parallels here would be like comparing stick figures to still
life paintings and proclaiming "They're both flowers!" While it might be true,
you won't learn much about still life paintings from stick figures.

At best this research says something about the task of navigation and optimal
representations for that task, rather than anything profound about neural
networks other than that they can both optimize for some task, which should
surprise no one.

~~~
chiefalchemist
Things get overstated because otherwise there would be nothing worth
publishing. It's the same (f'd up) model that drives the MSM, etc.

The Internet...access to all the information in the world; most of which isn't
new or worth knowing about. Bring a shovel. You're gonna need it.

~~~
dmix
Just because it's not some profound connection doesn't mean the analogy isn't
interesting in itself. The author maybe should have toned it down but it was
still interesting and blog post worthy IMO.

~~~
chiefalchemist
Perhaps. But why overblow it? That makes me suspicious.

I confess, I've spent far too many cycles reading "interesting" things only to
have close to no memory of them 2 or 3 days later.

My conclusion? Novelty does not equal an increase in the quality of my life.

That's not to discount (less impactful) entertainment; only to say that, for
me, interesting isn't enough anymore. It's too often not worth the time suck.

~~~
airstrike
There are literally dozens of us who feel the same way, yet I for one am still
looking for a way to cope.

~~~
abakker
Back in the old days, people paid for this. I don’t mean to sound snarky, but
in a small way, coping with the novelty is my job (as an industry analyst),
and some people choose to pay for that. If you need to know but can’t cope,
then you pay someone else who specializes in it.

This does not solve the problem of things that get published which you don’t
need to know and which flood your everyday life. I can’t help but feel that
that is an intrinsic part of the internet: both a feature and a bug.

------
wyattpeak
> The “grid units” that spontaneously emerged in the network were remarkably
> similar to what’s seen in animals’ brains, right down to the hexagonal grid.

Could someone with more experience in ML explain what this means? In what
sense do NN cells have positions or geometry? What are the NN heat maps below
the quote showing?

~~~
mr_toad
The neurons themselves don’t form grids. A map of the points in the real world
where a grid neuron fires forms a triangular/hexagonal grid.

These neurons seem to have discovered what board gamers found out much later:
hexagonal grids are better for calculating movement.

[https://en.m.wikipedia.org/wiki/Grid_cell](https://en.m.wikipedia.org/wiki/Grid_cell)
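
If it helps to make that concrete, the heat maps in the article are basically
this computation (a toy sketch with fake data; in the real thing, positions
and activations come from the trained agent wandering an arena):

    import numpy as np

    # A grid-cell "heat map" is the unit's mean activation binned by
    # where the agent was at each timestep. Data here is made up.
    rng = np.random.default_rng(0)
    positions = rng.uniform(0, 2.2, size=(10000, 2))  # (x, y) in a 2.2m box
    activations = rng.random(10000)                   # one unit's activity

    bins = 30
    rate_map = np.zeros((bins, bins))
    counts = np.zeros((bins, bins))
    ix = np.minimum((positions[:, 0] / 2.2 * bins).astype(int), bins - 1)
    iy = np.minimum((positions[:, 1] / 2.2 * bins).astype(int), bins - 1)
    np.add.at(rate_map, (ix, iy), activations)
    np.add.at(counts, (ix, iy), 1)
    rate_map /= np.maximum(counts, 1)  # mean activation per spatial bin

For a real grid unit, plotting rate_map shows hexagonally repeating blobs of
high activation tiling the arena.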

~~~
wyattpeak
Ah, thank you. That makes a lot more sense.

------
limsup
Link to pdf: [https://sci-hub.tw/https://www.nature.com/articles/s41586-01...](https://sci-hub.tw/https://www.nature.com/articles/s41586-018-0102-6.epdf?author_access_token=BjM-5BdGxd14c17YFA6PsdRgN0jAjWel9jnR3ZoTv0OEfySMT4t78PpPpCS7uExW3njb8Q4UlgcwRM32WwBCKZs73SThwkfI42wHhFEtJM-Y7sQxDsR1cR7_C9Kq1GwuxGJn46kzRnujvrDMGzc4TQ%3D%3D#)

~~~
TaylorAlexander
Thanks. I was curious what it takes to get it from Nature. For the low low
price of $199 you too can buy a subscription!

As a roboticist just beginning to read ML papers (to help in this very
field!), this information would otherwise just be out of reach.

------
taliesinb
On a train and don't have great wifi -- but there was a super-cool poster at
ICLR which demonstrated that training an RNN to perform dead-reckoning
naturally produced grid-like cells. Is this an extension of that work? Or an
independent discovery of the same phenomenon?

I'm trying to reproduce that original work, so far without success.

~~~
jimfleming
I think you're referring to this paper:

"Emergence of grid-like representations by training recurrent neural networks
to perform spatial localization".
[https://arxiv.org/abs/1803.07770](https://arxiv.org/abs/1803.07770)

It appears to be from Columbia rather than DeepMind, with different authors.
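
FWIW, the shared setup in both papers is roughly this (a PyTorch sketch; the
sizes, names, and raw-xy readout are my guesses, not either paper's actual
code):

    import torch
    import torch.nn as nn

    # Path-integration task: the RNN sees only velocities and must
    # report its (x, y) position; grid-like tuning reportedly emerges
    # in the hidden units. All sizes here are arbitrary.
    class PathIntegrator(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            self.rnn = nn.RNN(input_size=2, hidden_size=hidden,
                              batch_first=True)
            self.readout = nn.Linear(hidden, 2)

        def forward(self, velocities):           # (batch, time, 2)
            states, _ = self.rnn(velocities)
            return self.readout(states), states  # predicted positions, hidden

    model = PathIntegrator()
    vel = torch.randn(32, 100, 2) * 0.1   # fake velocity sequences
    true_pos = torch.cumsum(vel, dim=1)   # dead-reckoned ground truth
    pred_pos, hidden = model(vel)
    loss = nn.functional.mse_loss(pred_pos, true_pos)
    loss.backward()  # then bin `hidden` by position to look for grids

(The DeepMind paper actually trains against place-cell and head-direction
targets rather than raw coordinates, if I remember right.)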

------
jcims
I wonder what the coordinate plane is for the ML visualizations and how it
relates to the equivalent for visualizations from a physical brain. Seems ripe
for gaming.

~~~
zamalek
As it is dead reckoning, I assume it is relational/velocity.

------
eboyjr
This supports the idea that machine learning is increasingly coming full
circle to support neuroscience. Previously, AI researchers looked to the brain
for inspiration; now, more than ever, neuroscientists are being inspired by
advances in deep learning, etc.

~~~
yosito
I don't doubt that there are many similarities between how machine learning
works and how brains work, but this seems like a pretty myopic trend of
confirmation bias. Neurons and brains are so much more complex than machine
learning models, and it will be really unfortunate if we limit ourselves to
the machine learning model in neuroscience.

~~~
2bitencryption
Maybe the functional nodes (wetware neurons vs software neurons) are very
different, but it seems like the way they are manipulated via back-
propagation, pooling, recurrence, layering, etc., is similar, right?

Because, at the end of the day, it's more about how behavior emerges than how
behavior functions physically, I would say.

I would guess that if we ever get to some true sci-fi AI "consciousness", it
would just be a hyper-scaled version of what we already have. But that's just
fun speculation.

~~~
tormeh
Real cells use Hebbian learning, which in some cases is equivalent to back-
propagation, but is way less efficient. Otherwise, yes, many of the techniques
used in ML are also used in the body. Not only from ML, actually, but also
from electrical engineering and surely many other fields. The more you learn
about the body, the more machine-like it will look to you. In some respects.
In others, it's fucking space technology. Nanobots exist, they are called
proteins, and each of your cells has hordes of them, for example.
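
For contrast, plain Hebbian learning is just "fire together, wire together",
with no error signal at all. A toy sketch (using Oja's variant, one standard
fix for the weights blowing up):

    import numpy as np

    # Hebbian update: weight change is proportional to the product of
    # pre- and post-synaptic activity. No gradient, no error signal.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=8)  # synaptic weights of one neuron
    lr = 0.01

    for _ in range(1000):
        pre = rng.random(8)            # presynaptic activity (fake input)
        post = w @ pre                 # postsynaptic activity
        # Oja's rule: Hebbian term minus a decay that keeps |w| bounded
        w += lr * post * (pre - post * w)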

~~~
stochastic_monk
The big difference is that biological neurons emit binary outputs. Because of
this, there’s no gradient and they can’t be trained by SGD.

~~~
yorwba
But biological neurons are also stochastic, and the firing probabilities are
continuous, so you can do SGD on them.
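
For example, the score-function (REINFORCE) trick gives you a gradient through
a Bernoulli unit even though each sample is binary. A toy sketch:

    import numpy as np

    # One stochastic binary neuron that fires with probability
    # sigmoid(w @ x). The gradient of the expected loss w.r.t. w is
    # E[loss * d log P(spike) / dw], so SGD works on binary samples.
    rng = np.random.default_rng(0)
    x = rng.random(4)
    w = np.zeros(4)
    lr = 0.1

    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-w @ x))  # firing probability
        spike = float(rng.random() < p)   # binary output
        loss = (spike - 1.0) ** 2         # we want the neuron to fire
        grad_logp = (spike - p) * x       # d log P(spike) / dw
        w -= lr * loss * grad_logp        # noisy but unbiased step

(In practice you'd subtract a baseline to cut the variance, or use a straight-
through estimator instead.)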

------
bra-ket
why use "deep reinforcement learning" at all, is there any basis to believe
it's a valid model of biological learning?

~~~
bitL
Natural reinforcement learning [1] was known before the mathematical kind. Of
course, people often mistake Markov chains for reality, but they can still be
useful, even in completely unexpected ways, as with DRL.

[1] Rescorla RA, Wagner AR. A theory of Pavlovian conditioning: variations in
the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy
WF, editors. Classical conditioning II. New York: Appleton-Century Crofts;
1972. pp. 64–99.
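
The Rescorla-Wagner update is also strikingly close to a TD/delta-rule step,
which is presumably part of why the RL framing stuck. A sketch (alpha and beta
collapsed into one learning rate):

    import numpy as np

    # Rescorla-Wagner: association strengths V change in proportion to
    # the prediction error (lambda minus the summed V of all stimuli
    # present on the trial).
    V = np.zeros(2)                # strengths for stimuli A and B
    lr = 0.1                       # alpha * beta collapsed together
    lam = 1.0                      # max conditioning the US supports

    for _ in range(100):           # A and B paired together with the US
        present = np.array([1.0, 1.0])
        error = lam - V @ present  # same form as a TD/delta-rule error
        V += lr * error * present  # blocking etc. fall out of this rule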

~~~
bra-ket
Looks like reinforcement learning people have been missing out on the last 70
years of cognitive psychology. Stimulus-response theories of classical
conditioning have been a subject of controversy since at least the '50s [1];
these are very poor cognitive models.

[1]
[https://en.wikipedia.org/wiki/Behaviorism#Criticisms_and_lim...](https://en.wikipedia.org/wiki/Behaviorism#Criticisms_and_limitations_of_behaviorism)

------
bluetwo
Give me a valuable use for this and I'll give you an up-vote.

