
The Surprising Relativism of the Brain’s GPS - dnetesn
http://nautil.us/issue/67/reboot/the-surprising-relativism-of-the-brains-gps-rp
======
IshKebab
Surely our current knowledge of artificial neural networks suggests that a
single cell is unlikely to encode a simple, disentangled concept like absolute
position, or even predicted future position?

I know artificial and real neural networks are only loosely similar, but I
would be surprised if real neural networks did not also have distributed
representations.

~~~
RootGenerated
There are many areas of the brain with very different properties from one
another. This makes sense when you consider that they likely solve different
(sub)problems. While a distributed representation is very useful for things
like semantic clustering, it actually hurts the ability to discriminate
between similar things when you need to remember a specific relationship
between them. This is the case in the hippocampus, where we store memories of
events as well as the structure of the world (e.g. a map of sorts, as
mentioned in this article). So an artificial neural network probably isn't the
best model for this.
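The discrimination point here is essentially pattern separation, which the
hippocampus is thought to perform. A toy sketch of the intuition (my own
simplification, not anything from the article, using winner-take-all
sparsification as a stand-in for a sparse code): two dense input patterns that
are nearly identical can map to sparse codes that share no active units at
all.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(x * x for x in v)))

def sparsify(v, k=1):
    """Keep only the k most active units (winner-take-all), binarized."""
    winners = set(sorted(range(len(v)), key=lambda i: v[i], reverse=True)[:k])
    return [1.0 if i in winners else 0.0 for i in range(len(v))]

# Two highly overlapping dense activity patterns (say, two nearby locations).
a = [0.9, 0.7, 0.6, 0.5]
b = [0.6, 0.7, 0.9, 0.5]

print(round(cosine(a, b), 3))            # dense codes: ~0.953 similar
print(cosine(sparsify(a), sparsify(b)))  # sparse codes: 0.0, fully separated
```

The sparse code amplifies the small difference between the inputs (different
units win the competition), which is exactly the property you want for
storing similar-but-distinct memories without interference.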

~~~
IshKebab
Artificial neural networks _also_ have different regions that have very
different properties from each other. They have convolutional layers and fully
connected layers, and some architectures have separate semantic labelling and
visual processing parts. Take a look at Mask R-CNN for example:

[https://medium.com/@jonathan_hui/image-segmentation-with-mas...](https://medium.com/@jonathan_hui/image-segmentation-with-mask-r-cnn-ebe6d793272)

I don't think that is evidence that the brain doesn't use distributed
representations.

