
Neuroscientists create ‘atlas’ showing how words are organised in the brain - arashdelijani
https://www.theguardian.com/science/2016/apr/27/brain-atlas-showing-how-words-are-organised-neuroscience
======
kough
Most interesting passage to me:

> Strikingly, the brain atlases were similar for all the participants,
> suggesting that their brains organised the meanings of words in the same
> way. The scientists only scanned five men and two women, however. All are
> native English speakers, and two are authors of the study published in
> Nature. It is highly possible that people from different backgrounds and
> cultures will have different semantic brain atlases.

Would love to see this study performed on a much larger, more diverse group, to
see what similarities and differences there are. Also, the same person at
different ages.

------
tom_wilde
Interactive 'browser' for this:
[http://gallantlab.org/huth2016/](http://gallantlab.org/huth2016/)

------
BonsaiDen
This would definitely be very interesting with regard to language study.
Repeating the experiment every few weeks with people who are actively in the
process of learning a new language could give some remarkable insight into how
the new words are being mapped onto the brain.

------
deepnet
Interesting to compare this semantic map with the semantic maps that Mikolov's
word2vec produces, and word2vec's resultant semantic vector arithmetic (king
- man + woman ~= queen).
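
For anyone unfamiliar with it, the arithmetic can be sketched in a few lines. (A toy illustration only: real word2vec embeddings are learned, high-dimensional, and only *approximately* satisfy such relations; the 2-D vectors below are hand-made so the relation holds exactly.)

```python
import numpy as np

# Hand-made toy embeddings with axes roughly [royalty, maleness].
emb = {
    "king":  np.array([1.0,  1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
}

def nearest(vec, exclude=()):
    """Return the vocabulary word whose embedding is closest to vec (cosine)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(emb[w], vec))

# king - man + woman lands on queen in this toy space.
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen
```

With real pretrained vectors the equivalent query is a nearest-neighbour search over the whole vocabulary, excluding the query words themselves.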

Perhaps Hinton's "thought vectors" do have a wetware analogue.

~~~
igravious
What would this mean if so? And if not, what would that imply? :)

~~~
deepnet
In terms of (artificial) neural networks, it implies their internal
representation is functional, not merely symbolic.

Soumith has shown this applies to visual modalities: "smile" vectors or
"wearing sunglasses" vectors are similarly present[1].

This type of vector algebra has some use in translating between languages and
modalities.
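
The translation idea (a la Mikolov's translation-matrix work) can be sketched as learning a linear map between two embedding spaces from known word pairs. (A toy sketch under made-up data: the "English" and "French" vectors below are hand-made rotated copies of each other, not learned embeddings.)

```python
import numpy as np

# Toy "French" space: a 45-degree rotation of the toy "English" space.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

en = {"dog":  np.array([1.0, 0.0]),
      "cat":  np.array([0.0, 1.0]),
      "bird": np.array([1.0, 1.0])}
fr = {"chien":  R @ en["dog"],
      "chat":   R @ en["cat"],
      "oiseau": R @ en["bird"]}

# Fit W by least squares on two known translation pairs...
X = np.stack([en["dog"], en["cat"]])
Y = np.stack([fr["chien"], fr["chat"]])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# ...then "translate" a held-out word by mapping it and taking the
# nearest neighbour in the target space.
pred = en["bird"] @ W
best = min(fr, key=lambda w: np.linalg.norm(fr[w] - pred))
print(best)  # -> oiseau
```

The real method is the same shape, just with hundreds of dimensions and thousands of seed pairs.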

Karpathy[2] has demonstrated that the internal vector of a descriptive
sentence can be 'plugged into' an image-recognising network and used to find
vectors representing pictures.

Perhaps a sentence vector could even generate a picture.

What Neural Nets do internally is still mysterious.

(A highly speculative example:) if DeepGo's internal vectors could be
translated into the modality of plain English, we might glimpse its 'mind' at
work.

What this means for brains is anyone's guess; I find inspiration in Geoff
Hinton's guesses[3].

[1] - searching

[2]
[https://www.youtube.com/watch?v=ZkY7fAoaNcg](https://www.youtube.com/watch?v=ZkY7fAoaNcg)

[3]
[http://www.computing.co.uk/ctg/news/2409871/document-search-engines-will-be-able-to-think-and-reason-like-people-argues-ai-expert](http://www.computing.co.uk/ctg/news/2409871/document-search-engines-will-be-able-to-think-and-reason-like-people-argues-ai-expert)

~~~
jawarner
The brain evolved, and neural nets are trained, to optimize for high-density,
"low-cost" information storage. So it would make sense if there is a
structural correlation between the two in terms of how information is
represented. I suspect that more biologically realistic neural nets would lead
to more correlation.

------
shireboy
Not sure I'd call fine-tuning political brainwashing "benign". Your Jedi mind
tricks will not work on me.

------
brudgers
Previously:
[https://news.ycombinator.com/item?id=11584933](https://news.ycombinator.com/item?id=11584933)

