
Grid-Cell Like Deep Layers in Navigation AI - SubiculumCode
https://www.nature.com/articles/s41586-018-0102-6
======
baylearn
This DeepMind work has been gathering a lot of hype, but it is very similar to
this work by Cueva and Wei:

Published on Arxiv on March 31:
[https://arxiv.org/abs/1803.07770](https://arxiv.org/abs/1803.07770)

Published at ICLR 2018, a well-known machine learning conference:
[https://openreview.net/forum?id=B17JTOe0-](https://openreview.net/forum?id=B17JTOe0-)

It is rather disheartening that they didn't acknowledge the earlier work; it
seems like a deliberate oversight.

The Cueva and Wei paper is well written, explains the concepts clearly without
the hype, and was published in an open-access venue.

~~~
yokaze
Well, the paper was received by Nature on 05 July 2017, so it isn't
surprising that they didn't cite the work from Cueva and Wei, published 21 Mar
2018.

~~~
apl
That's not quite how the process works. These papers go through multiple (> 2)
revisions, and at any iteration there's ample opportunity to update the
references. This applies doubly given how long the Cueva/Wei work has been
available (preprint & CCN'17 (?) contribution).

------
inarrears
The link says I have to pay $8.99 to "rent" the article, or subscribe every
year for $199. WTF? Is this science?

Why can't people publish research in open access journals or conferences?

~~~
zerostar07
DeepMind has an unfortunate history of publishing in Nature for the prestige.
Maybe because Hassabis was a neuroscientist and acquired that bad habit there.
But does it matter in the CS/ML world? I think it doesn't, because a number of
people have done it with papers that range very widely in significance (e.g.
this is much less significant than the original deep Q-learning paper, which
was also in Nature, IIRC).

~~~
naturalgradient
From my observation (from knowing a few people there from my lab), DeepMind is
still organised relatively similarly to university groups.

Professors moving over are made to lead research teams, and while there may be
some overall organisational goals, researchers still care about their
individual career advancement and prestige as measured by paper output. So
most people working there are incentivised to do what is best for their
individual standing in the research community after leaving DeepMind.

------
goldenkey
Hierarchical Temporal Memory is exactly what this is. HTM has been
successfully used to create an artificial hippocampus. Watch the video in the
first link below and try not to be impressed.

Links:

[https://discourse.numenta.org/t/oscillatory-thousand-brains-minds-eye-for-htm/3726](https://discourse.numenta.org/t/oscillatory-thousand-brains-minds-eye-for-htm/3726)

[https://en.m.wikipedia.org/wiki/Hierarchical_temporal_memory](https://en.m.wikipedia.org/wiki/Hierarchical_temporal_memory)

