
Ask HN: What data structure does our brain use? - Aarvay
Let's have an open discussion on this. We might be able to come up with something!<p>--
Aarvay
======
gregdetre
There are so many differences that one's standard intuitions as a computer
scientist can be very misleading...

I wrote on this elsewhere:

[http://blog.memrise.com/2011/05/how-is-memory-stored-in-
brai...](http://blog.memrise.com/2011/05/how-is-memory-stored-in-brain.html)

[http://blog.memrise.com/2011/05/how-are-brains-different-
fro...](http://blog.memrise.com/2011/05/how-are-brains-different-from-
hard.html)

For instance:

\- Storage and parallel computation in the brain are very expansive and cheap,
so the brain prefers to store rather than compute where it can.

\- Above all, the brain's storage is highly content-addressable. Similar
things in the world are stored with similar representations, so that the brain
can generalize, and see commonalities. This is not a graph - graphs are
discretized - this is much more flexible.

\- Even the acts of storage and retrieval are themselves a kind of
computation, a transformation, a compression and a learning experience.

\- Memories are not clean silos. Storing a new memory can subtly (and not so
subtly) affect other nearby or related memories

\- Different parts of the brain use different storage parameters. For
instance, the hippocampus is like a hash table, storing each memory relatively
cleanly and in isolation, but can only be accessed with exactly the cue. In
contrast, the cortex stores memories in a much more content-addressable,
overlapping way that's invariant to many small differences (e.g. we can
recognize a face whether it's rotated, sunny, tanned, close up, obscured).
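
To make the contrast concrete, here's a toy Python sketch (the cues and "episodes" are entirely made up): a dict, like the hippocampus in this analogy, needs the exact cue, while a similarity-based store still retrieves the right memory from a degraded cue.

```python
import math

def cosine(a, b):
    """Similarity between two cue vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "Hippocampus-like": exact cue required, memories kept in isolation.
hash_store = {(1, 0, 1, 0): "episode A", (0, 1, 1, 1): "episode B"}

# "Cortex-like": retrieval by similarity; a degraded cue still works.
vector_store = [((1.0, 0.0, 1.0, 0.0), "episode A"),
                ((0.0, 1.0, 1.0, 1.0), "episode B")]

def recall(cue):
    return max(vector_store, key=lambda kv: cosine(cue, kv[0]))[1]

degraded_cue = (0.9, 0.1, 0.8, 0.0)    # noisy version of episode A's cue
print(hash_store.get(degraded_cue))    # None: exact match fails
print(recall(degraded_cue))            # episode A: similarity still finds it
```

The point isn't that the cortex computes cosines, just that similar inputs mapping to similar representations buys you graceful degradation for free.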

~~~
Cushman
What does it say about me that all of that makes perfect intuitive sense?

As computer scientists, we get really attached to binary logic because that's
how computers work. I think sometimes it doesn't occur to us that it might not
actually be the best way to model the universe.

~~~
gmt2027
Every time I write non-numerical code, it is fairly obvious that translation
to this unnatural 'intermediate representation' requires extra effort.

------
zerostar07
Our insights into memory formation are still limited. The most studied
candidate is LTP/D[1], the change of the weights of synapses between neurons.
Memory formation is a complex process though. While LTP can be induced with a
short burst of spikes, its stabilization and maintenance depend on a sizeable
number of molecules and genes. We still don't know at what level memories may
be stored: it could be single synapses, groups of synapses, single neurons, or
ensembles of neurons.

There are experiments involving fear memory that have shown that a fearful
event can be "stored" in an ensemble of identifiable neurons, and even be
turned on and off [2].

Then there are brain rhythms and sleep. Memories would become overwritten if
they were stored in the same circuits over and over, so there are various
theories about how memories are transferred to various parts of the cortex via
coordinated rhythmic activity or sleep.

Closer to your question, the data structure that our serial thinking brain
uses is language. We think, reason and communicate using it. Language has a
tree-like syntax, but semantics are an unsolved problem. There is even a
theory that suggests that brain rhythms may encode "sentences" into thoughts
via neuronal oscillations[3].

1: <http://en.wikipedia.org/wiki/Long-term_potentiation>

2:
[http://www.silvalab.com.cnchost.com/silvapapers/ZhouNN2009.p...](http://www.silvalab.com.cnchost.com/silvapapers/ZhouNN2009.pdf)

3:
[http://osiris.rutgers.edu/BuzsakiHP/Publications/PDFs/Buzsak...](http://osiris.rutgers.edu/BuzsakiHP/Publications/PDFs/Buzsaki2010Neuron.pdf)

------
espeed
The brain most closely resembles a graph. See Marko's post on "Graphs, Brains,
and Gremlin" ( [http://markorodriguez.com/2011/07/14/graphs-brains-and-
greml...](http://markorodriguez.com/2011/07/14/graphs-brains-and-gremlin/)).

Sebastian Seung is a leading researcher in the field of neuroscience called
connectomics, which studies the wiring of the brain, and he is a professor at
MIT's Department of Brain and Cognitive Sciences. He is focused on mapping the
connections between each neuron and calls the mappings our "connectome," which
he says is as individual as our genome.

He says scientists have hypothesized for years that each thought, each memory
is stored as a neural connection. See his TED talk "I Am My Connectome"
(<http://www.ted.com/talks/lang/eng/sebastian_seung.html>) and the Human
Connectome Project (<http://www.humanconnectomeproject.org>).

~~~
DavidChouinard
I second the recommendation of Seung's TED talk. A great watch.

------
suki
Geoffrey Hinton "Next Generation Neural Networks"

<http://www.youtube.com/watch?v=AyzOUbkUf3M>

-It is more biologically plausible than any other NN algorithm I've seen

-It results in creativity (in the video he has the computer "imagine the number 2")

-It pretty much explains why we need to sleep/dream. The network has to be run both forward (accepting sensory input) and backwards (generating simulated sensory input) in order to learn

-It emphasizes the point that the brain is NOT trying to do matrix multiply (or any other deterministic calculation) with random elements (if it was trying to be an analog computer it would be). The randomness is an essential part of the algorithm.

~~~
reader5000
I agree. Hopfield networks, of which Hinton's Boltzmann machines are
substantial elaborations, have many human-like properties:

-can fill in details as a result of noisy or missing input

-can sometimes "see" patterns in random noise
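
A minimal Hopfield network illustrating the fill-in property. This is the textbook construction (Hebbian outer-product weights, deterministic update sweeps), not Hinton's actual Boltzmann machine, and the pattern is arbitrary:

```python
def train(patterns):
    """Hebbian outer-product rule: w_ij accumulates x_i * x_j over patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, sweeps=5):
    """Repeatedly update each unit toward its local field until it settles."""
    n = len(state)
    state = list(state)
    for _ in range(sweeps):
        for i in range(n):
            field = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state

pattern = [1, -1, 1, -1, 1, -1]
w = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]          # corrupt one bit
print(recall(w, noisy))       # recovers [1, -1, 1, -1, 1, -1]
```

The corrupted input falls into the attractor basin of the stored pattern, which is exactly the "fill in missing details" behavior.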

------
strayer
The question is fundamentally flawed, in that "data structure" is a concept
used by programmers to communicate with computers (or between programmers).

A comparable question would be wondering whether the computer you are using
right now, at this point in time, is inside a for loop, or a while loop, or
it's just using tail recursion.

Surely some cognitive scientist can make my point more explicit; sorry that
all I can offer is a counterexample.

~~~
mechanical_fish
Yes, isn't our conception of data structures bound up fairly tightly with the
available storage media and data buses? We have thought a lot about how to
organize and retrieve data on tapes, and spinning disks, and random-access
memories composed of discrete little bins, each storing a bit and addressed in
rows and columns.

But we don't make extensive use of (say) analog computers, or computers with
data buses having several million sub-channels. Hand us a machine employing
such principles and it's back to square one. (Except for the lucky pure
mathematician or two who got there first but whose work remains obscure right
now, the way George Boole's work used to be obscure.)

And my stupid examples are just examples - I have no idea if the brain, or any
bit of it, is best conceived of as an analog computer. Nobody knows what kind
of computer the brain is like, except that it is almost entirely unlike the
silicon-based digital computers that we build in the von Neumann tradition.
And, presumably, when it comes time to discuss brain-based data structures
they will turn out almost entirely unlike the structures in our digital-
computer-data-structures textbooks.

~~~
jonnytran
Agreed. Another example is quantum computing.

I'm actually really surprised that no one has even mentioned the idea that the
brain may be a quantum computer. Check out this Google Tech Talk entitled
"Does an Explanation of Higher Brain Function require references to Quantum
Mechanics" by Hartmut Neven. <http://www.youtube.com/watch?v=4qAIPC7vG3Y>

------
gmt2027
Short answer: A graph.

All data structures are simplified graphs.

The state of the physical universe is a massive graph in which interconnected
objects are themselves massive assemblies of graphs of atoms and the atoms are
graphs of subatomic particles. It's graphs all the way down. The properties of
all systems - physical, chemical, economic, biological - emerge from the
interactions between simple connected elements.

My opinion is that all knowledge is representable as a connected graph. The
disconnect between our computers and our minds arises from the fact that
brains are categorically not numerical machines but graph processing and
pattern recognition engines. Neural networks are the underlying hardware and,
with the typical elegance of nature, these are also graphs.

It should be possible to build a graph-based language. The basic "Elements"
[SICP] are easy to realise:

1\. Primitive Expressions are graph nodes. They have identity and not much
else.

2\. Means of combination. Graphs can be added, subtracted etc.

3\. Abstraction. A graph can be abstracted into a single node. We have no
problem looking at a complex assembly of components as a single entity.

Since Google and Facebook are two massive platforms whose value arises from
direct interaction with planet-scale graphs with billions of nodes, would
these platforms be easier to build if our computers were more graph oriented?
I would like to believe so.
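
A rough Python sketch of points 2 and 3, with (nodes, edges) pairs standing in for graphs; the node names are invented for illustration:

```python
# "Means of combination": graphs merged with set union.
def combine(g1, g2):
    return (g1[0] | g2[0], g1[1] | g2[1])

# "Abstraction": collapse a subgraph into a single node, rewiring
# boundary edges and dropping edges internal to the abstracted part.
def abstract(g, subgraph_nodes, new_node):
    nodes, edges = g
    kept = (nodes - subgraph_nodes) | {new_node}
    rewired = set()
    for a, b in edges:
        a2 = new_node if a in subgraph_nodes else a
        b2 = new_node if b in subgraph_nodes else b
        if a2 != b2:                  # edge was internal to the subgraph
            rewired.add((a2, b2))
    return (kept, rewired)

engine = ({"piston", "crank"}, {("piston", "crank")})
car = combine(engine, ({"wheel"}, {("crank", "wheel")}))
car_abstract = abstract(car, {"piston", "crank"}, "engine")
print(car_abstract)   # nodes {'engine', 'wheel'}, edge ('engine', 'wheel')
```

Point 3 is the interesting one: after abstraction the "engine" behaves like a primitive node, which matches how we treat a complex assembly as a single entity.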

~~~
shasta
And a graph is a relation, and a relation is a function, etc. Just because you
can model everything with graphs doesn't make them special. You can model
everything with lots of things.

~~~
Dn_Ab
A correction. A relation is not necessarily a function, but a function is a
relation. A function is a restriction of a relation such that, for a pairing
of (thing_a, thing_b) by some relation f with thing_a in A and thing_b in B,
each thing_a can be paired with only one thing_b in B.

~~~
shasta
A relation between X and Y is a function X -> Y -> Bool.
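
In code, roughly (the example relation is chosen arbitrarily): a relation as its characteristic function X -> Y -> bool, and the function condition as "each x pairs with exactly one y".

```python
# A relation represented as a curried characteristic function.
def divides(x):
    return lambda y: y % x == 0      # X -> (Y -> bool)

# The same relation as a set of pairs over small finite domains.
rel = {(x, y) for x in range(1, 4) for y in range(1, 7) if divides(x)(y)}

def is_function(relation, domain):
    """A relation is a function iff each domain element pairs with one value."""
    return all(sum(1 for (a, _) in relation if a == x) == 1 for x in domain)

print(divides(3)(6))           # True
print(is_function(rel, [2]))   # False: 2 pairs with 2, 4, and 6
```

So both framings are right: "divides" is a perfectly good relation-as-function-to-Bool, yet fails the stricter single-valued condition that would make it a function A -> B.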

~~~
Dn_Ab
Yes, sorry, you are right. You were talking about representable/modelled by -
which I was conflating with _Equivalent as Is_. A subtle distinction I missed.

------
cydonian_monk
Along a similar subject, should we someday be able to replace sets of neurons
in our brains with nanomachines (built to replicate neurons), would we be able
to replicate these data structures? Or are they something intrinsic to our
organic brain? Similarly, if we could build an even larger-scale version of
such a structure external to the brain, could we interface it with our
existing consciousness? (Could we then 'share' memories?)

Working backwards from this, could we build (today) a graph-based (or
whatever-structure) recording device that will store data in much the same way
we build memories? Such would obviously require a greater understanding of the
interconnection of neurons and the storage of memory, as discussed elsewhere
in this topic. [Small edit: Knowing the data structure is nice; being able to
use it is golden.]

I personally believe the next great "hack" our species should embark upon is
the brain and the body. We need to be more robust if we are ever to escape
this rock. (And the dreamer in me wishes we could keep our consciousnesses
and/or memories around eternally, but that introduces entirely different
problems, and is probably an unrealistic ideal.)

------
chubot
A big part of the brain is the divide between conscious and unconscious. Your
brain is constantly making random associations. If I read an article about
Steve Jobs, I might remember something a friend said 10 years ago about him;
and then I might remember that this friend lives in Brooklyn now; and then
think about other people I know who live in Brooklyn. The brain just likes to
make connections; perhaps synesthesia is an example of it getting slightly
overworked.

I would say the unconscious part is basically the equivalent of visiting a
web page, Googling every term on that page, visiting those pages, and
repeating ad nauseam (so perhaps like an inverted index on
concepts/ideas/sensations rather than terms).
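
A toy version of that inverted-index idea, with made-up "memories" (this is an analogy, not a claim about actual neural mechanisms): each concept indexes the memories that mention it, and idle association alternates concept -> memories -> concepts.

```python
from collections import defaultdict

# Invented memories, each tagged with the concepts it involves.
memories = {
    "article": ["steve jobs", "apple"],
    "friend_quote": ["steve jobs", "friend"],
    "friend_move": ["friend", "brooklyn"],
    "party": ["brooklyn", "music"],
}

# Inverted index: concept -> memories that mention it.
index = defaultdict(set)
for memory, concepts in memories.items():
    for c in concepts:
        index[c].add(memory)

def associate(start_concept, hops=3):
    """Fan out from a concept through shared memories, like mind-wandering."""
    seen = {start_concept}
    frontier = {start_concept}
    for _ in range(hops):
        next_frontier = set()
        for c in frontier:
            for m in index[c]:
                next_frontier.update(memories[m])
        frontier = next_frontier - seen
        seen |= frontier
    return seen

print(associate("steve jobs"))   # reaches "brooklyn", then "music", via "friend"
```

Three hops take you from Steve Jobs to people in Brooklyn, which is exactly the chain of associations described above.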

As for the conscious part, that's where the magic is. The unconscious brain
generates breathtaking amounts of useless crap, but the conscious brain
manages to filter it and do things like design software and make movies. I
can't really speculate on how the conscious mind does this. Creativity is
different from recall.

There is a feedback loop too. If your conscious mind starts ruminating on
stuff, then the unconscious mind will generate more of it. We had a discussion
about how writing down dreams causes you to produce/remember more of them.
There is also the phenomenon of playing a game like Tetris or Scrabble, and
then your unconscious brain starts "rehearsing" all the moves in the
background (sometimes against your will). It knows what you've been doing and
just starts going off and making connections.

(If you are interested in this general subject, read "On Intelligence" by Jeff
Hawkins. It will at least get you thinking, and he has fairly concrete ideas.
He doesn't go into what I am writing about here, but as far as books that
pertain to your question go, it's the one I was reminded of.)

------
beej71
On Monday, it starts a low-speed read of a big array off magtape. Tuesday
through Thursday, it continues to read the array. On Friday, a random number
is chosen, and the activity stored in that array element is performed Friday
night, possibly setting the REGRET flag. Saturday is spent shuffling the
array, while on Sunday, a high-speed write of the data is performed back to
tape. At 12AM Monday, the brain executes a GOTO 10 instruction, and the
process repeats.

------
vga15
I'd like to imagine the brain is more like the 'world wide web' as a data
structure than a pure graph.

It isn't just a question of pure storage and retrieval. There's quite a bit
of varied experience on the same root (and its storage) happening across
multiple times and layers within.

Content that gets shared/liked more, gets replicated, re-iterated on,
transformed, re-tagged. As time progresses, you'll find more content similar
to the parent, being generated -- re-experienced via dreams and the
subconscious.

Eventually, when it's necessary to dig out the piece of content using tags or
searches, it'll end up finding the most 'linked-to' piece. Possibly one that
was associated with an explosion of favorable chemicals.

\-----

There's some evidence to suggest that the same experience isn't stored as a
single piece of memory. During the process of consolidation [when a long-term
memory gets etched], the 'experience' being transcribed goes through
iterations, with variations being stored as well, some of them decaying almost
instantly.

It's possible the brain applies 'Instagram'-like filters while etching these
memories (a process that happens over weeks).

It'll be interesting if in the future, we could modify/augment these 'filter'
processes. Both at the storage and retrieval stages.

[<http://pubs.acs.org/cen/coverstory/85/8536cover.html>]
[<http://en.wikipedia.org/wiki/Engram_(neuropsychology)>]

------
aufreak3
The most recent claim in this area that I've heard about, and on which people
are willing to bet money, is Numenta's "Hierarchical Temporal Memory" (HTM).
Jeff Hawkins (of Palm's Graffiti fame) is behind this and he also wrote a book
called "On Intelligence" which discusses some of these ideas.

It looks like Numenta is making some steady progress in trying to
commercialize its HTM technology as well.

------
shriphani
Some observations I came across in a cognition class:

-> We can perform a visual lookup (identify the circle colored differently from these other circles in this group of circles) in O(1) time. Or if we are asked to locate a friend from an array of people (array fits in field-of-view), we can identify said friend in constant time.

-> By nature, we classify objects based on how we use them. So a table and a
chair may have the same physical 4-legged, flat-top structure, but we
differentiate them because we use them differently.
So, my guess :

-> A highly trained decision tree allowing us to perform classification of objects in our environment based on their use (the training set is whatever is in our field of view, and as such we are bombarded with large amounts of data). A Hebbian-rule-based ANN for training.

For dealing with visual stimuli at least, I would bet that this is the model
we are using right now.

Also, our classifier seems to be operating in parallel on all the objects
available in our FOV.
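
A deliberately tiny sketch of classifying by use rather than shape; the features and the hand-built decision "tree" are invented for illustration:

```python
# Table and chair share geometry (flat top, legs); the affordance
# feature "sat_on" is what separates them in this toy tree.
def classify(obj):
    if not obj["flat_top"]:
        return "other"
    if obj["sat_on"]:          # use, not geometry, decides the label
        return "chair"
    return "table"

desk = {"flat_top": True, "sat_on": False}
stool = {"flat_top": True, "sat_on": True}
print(classify(desk), classify(stool))   # table chair
```

A real learned tree would pick such affordance features out of experience; the point here is only that a geometric description alone can't split the two classes.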

------
zipdog
It's worth noting that the brain changes as we learn; e.g. when we learn
maths, the brain's structure is altered in discernible ways:

<http://med.stanford.edu/ism/2011/june/math.html>

So quite possibly the data storage used by a person with one particular
upbringing/education would differ from that of a person with a different one.

In addition, different parts of the brain are likely storing data in different
ways.

------
woodson
A lot of work has been done on this issue, specifically with respect to the
mental lexicon: the lexemes/entities of language and their storage. A lot of
it was based on psychological testing, for example reaction times when words
are presented that are somehow associated, etc.

I guess that, while on a neurological level the storage of different types of
memory might work similarly, the 'data structure' is perhaps different.

------
aangjie
Well, my bet is on it being something of a mix between a Bloom filter and a
graph, i.e. the interconnectedness of a graph and the probabilistic nature of
a Bloom filter would both be fundamental elements. Of course, this will make
sense only as a simulation model of the black box that is our
brain/neurons/neural pathways.
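
For readers unfamiliar with Bloom filters, a minimal sketch (size and hash count chosen arbitrarily): membership tests are probabilistic, with no false negatives but occasional false positives, which is the "feels familiar but isn't" flavor of recognition.

```python
import hashlib

class BloomFilter:
    """Probabilistic set: added items always match; others almost never do."""
    def __init__(self, size=256, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        # Derive `hashes` bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits & (1 << p) for p in self._positions(item))

bf = BloomFilter()
bf.add("grandmother's face")
print("grandmother's face" in bf)   # True: no false negatives
print("stranger's face" in bf)      # very likely False, but false
                                    # positives are possible by design
```

Layering this over a graph, as suggested above, would give cheap "have I seen this before?" checks before any expensive traversal.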

------
gbog
Interesting answers here, but from their diversity it seems to me that we in
fact still don't know much about cognitive representations and their physical
anchoring in the brain. I once worked in a lab studying how the brain sees
through the eyes; it wasn't close to explaining how we know what it is we see.

------
tgflynn
I think of it as a concept graph. Nodes represent concepts, and weighted edges
navigable relations between concepts. Further, one can associate excitation
levels with concepts.

At any given instant there is a set of excited concepts. Then there is some
algorithm for time evolution of the excitations.

------
brettvallis
Muscle memory, or engrams? How about: off-CPU EEPROM with a very slow burn-in
rate. The computation is handled remotely, or distally, but it only becomes
overriding after a certain number of like computations. It can be unlearnt:
erased.

------
jtchang
So many different answers. We really don't know much about how our brain
encodes data, and we are so far from actually reproducing it. Scary when you
think all of us have a copy of this memory structure.

------
Aarvay
Just did a crappy summary : [http://blog.aarvay.in/brain-is-the-most-
complicated-data-str...](http://blog.aarvay.in/brain-is-the-most-complicated-
data-structure)

------
jamalkumar
[http://www.quantumconsciousness.org/documents/membytespublis...](http://www.quantumconsciousness.org/documents/membytespublished.pdf)
I'll just leave this here

~~~
jonnytran
A brief summary for us non-chemist CS kids would be helpful. Thanks.

~~~
jamalkumar
[http://www.amazon.com/Emerging-Physics-Consciousness-
Frontie...](http://www.amazon.com/Emerging-Physics-Consciousness-Frontiers-
Collection/dp/3540238905) << Good luck finding a 'brief' summary. It's a lot
of theoretical shit that isn't absolute yet but you CS kids would do well to
take a tip from your mind and move beyond booleans and algorithms into
multivalued logical systems and logarithms:
www.fuzzytech.com/binaries/ieccd1.pdf

(I bet you can find a copy of the aforementioned book on <http://library.nu>
as it contains a pretty wicked intro to many different paradigms regarding
these matters)

------
saintfiends
This is an interesting topic for which I have not found an adequate answer.

I wish we could someday know this and learn to control what we want to store
for later use and what not to.

------
felipernb
I believe it would be a graph, but there's also a "hashmap cache" with O(1)
access for the most used nodes of the graph indexed by keywords or key-actions
:)
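
That hybrid is easy to caricature in Python: a memoizing cache in front of a slower graph traversal. The graph and its contents are invented for the example.

```python
from functools import lru_cache

# A toy association graph.
graph = {
    "coffee": ["morning", "caffeine"],
    "caffeine": ["alert"],
    "morning": ["sunrise"],
}

@lru_cache(maxsize=128)        # frequent lookups become O(1) dict hits
def reachable(start):
    """Slow path: walk the graph to find everything reachable from start.
    Returns a frozenset so the result is hashable and safely cacheable."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return frozenset(seen)

print(sorted(reachable("coffee")))
# the first call walks the graph; repeat calls are served from the cache
```

The analogy: heavily used associations get "promoted" into fast direct access while the general graph machinery remains available for everything else.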

------
kandu
The right response is that science doesn't know yet.

~~~
rmz
Well, yes. Science doesn't know, but that doesn't mean that scientists don't
speculate, or that some of those speculations may at some point be shown to be
more or less correct. Not knowing in a scientific way is a very interesting
kind of "not knowing" :-)

------
jackylee0424
A list (with fewer than 10 elements at a time) to store easy things. I think
only a few people's brains can function like a graph.

------
drycnyc
They're all wrong! It's just a massive array of Strings. No wonder I take so
long to recall anything.

------
nicholas22
There are multiple. I suppose long-term memory is a hashtable and working
memory is a stack ;)

~~~
tzs
I think long term memory is more organized than a hash table. I base this on
observation of my own brain during a bad LSD trip, where I seemed to be aware
of how I was reaching decisions.

For example, if I looked at a door and wanted to remember what that door was
for, I could "see" a vast 2 or 3 dimensional array of hundreds of doors that I
had seen before, arranged so that similar doors were near each other, and then
I was aware of some kind of focusing in on the region of door-space where
doors similar to the target door resided, and then I was aware of some kind of
comparison of each of the doors in that region serially with the door I was
looking at to find a match, and then the data for the best match was made
available to my normal consciousness.
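
Interestingly, that two-stage process, coarse region selection followed by a serial scan within the region, resembles bucketed nearest-neighbor search. A toy sketch with invented "door features" (the axes and names are made up):

```python
# Each door lives at a point in a 2-D "door space"; hypothetical axes.
doors = {
    "front door": (0.9, 0.1),    # (woodenness, glassiness)
    "barn door":  (1.0, 0.0),
    "patio door": (0.1, 0.9),
    "shop door":  (0.2, 0.8),
}

def region(vec):
    """Coarse bucket: which corner of door-space a door sits in."""
    return tuple(round(v) for v in vec)

buckets = {}
for name, vec in doors.items():
    buckets.setdefault(region(vec), []).append((name, vec))

def identify(target):
    # Stage 1: focus on the region of door-space near the target.
    candidates = buckets.get(region(target), [])
    # Stage 2: serial comparison of each door in that region only.
    # (A cue landing in an empty region would need a fallback; omitted here.)
    return min(candidates,
               key=lambda nv: sum((a - b) ** 2 for a, b in zip(nv[1], target)))[0]

print(identify((0.85, 0.05)))   # front door
```

Real approximate nearest-neighbor systems do essentially this: quantize into coarse cells, then scan only the nearby cells, which matches the "focus on the region, then compare serially" experience described above.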

During another part of that trip I was aware of audio processing. I'd hear
someone talking and first hear it as unintelligible sound, then I'd be aware
of the sounds being broken down into separate sound units, and then recognized
as English words, and then the relationships between the words being
recognized, and then I'd become aware of the meaning of what I'd heard.

I also had a time during that trip where I was watching visual processing. I'd
be aware of how my mind was noticing things in the scene I was looking at,
recognizing objects and remembering what they were, and combining that to
build an understanding of the scene.

Now I have no way of knowing how much of this was just hallucinations shaped
by my knowledge of computers, and how much was the LSD actually letting me
consciously observe mental processes that normally operate as black boxes to
the conscious mind.

It's too bad research using LSD was greatly curtailed when the drug was
banned. My suspicion is that what I perceived on that trip was a mix of
reality and hallucinations driven by my computer knowledge. With enough
experimentation, with people with different backgrounds observing and
reporting, we could probably get some real insights into what is really going
on in there.

~~~
nopassrecover
I found LSD did not introduce things that weren't present already. Instead, it
removed the filters that are necessary on a daily basis for me to get things
done, and suddenly I was aware of and appreciating all the things that my
subconscious normally sees and dismisses. The most vivid example of this was
an awareness of the patterns and textures surrounding us all the time, as well
as the relationship between spatial objects (e.g. a sign or a tree stood out
as significant, rather than part of the "background").

------
sathishmanohar
I also try to make sense of this, but it is too complex. Where do we store
imagination?

------
begriffs
Why are programmers so self-involved? The world is not a computer, and the
human mind is not digital. It would be like a forum post on
news.clockmakers.com asking, "Which wind-up clock spring does our brain use?"

------
Vargas
An associative array: <http://en.wikipedia.org/wiki/Associative_array>

------
afaulkner
checkout numenta.com

------
PaulHoule
a neural network

~~~
Aarvay
Can you elaborate?

------
teflonhook
RC circuits or delay lines

