I wrote on this elsewhere:
- Storage and parallel computation in the brain are vast and cheap, so the brain prefers to store rather than compute where it can.
- Above all, the brain's storage is highly content-addressable. Similar things in the world are stored with similar representations, so that the brain can generalize and see commonalities. This is not a graph - graphs are discrete - this is much more flexible.
- Even the acts of storage and retrieval are themselves a kind of computation, a transformation, a compression and a learning experience.
- Memories are not clean silos. Storing a new memory can subtly (and not so subtly) affect other nearby or related memories
- Different parts of the brain use different storage parameters. For instance, the hippocampus is like a hash table, storing each memory relatively cleanly and in isolation, but can only be accessed with exactly the cue. In contrast, the cortex stores memories in a much more content-addressable, overlapping way that's invariant to many small differences (e.g. we can recognize a face whether it's rotated, sunny, tanned, close up, obscured).
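The hash-table-versus-content-addressable contrast above can be made concrete with a toy sketch. All the names and feature vectors here are invented for illustration, not a claim about real neural codes:

```python
# A toy contrast between the two storage styles: exact-key lookup
# versus nearest-match lookup in a feature space.
import math

# "Hippocampus": exact-key lookup -- the cue must match exactly.
hippocampus = {"face_of_alice_frontal": "Alice"}

def hippocampal_recall(cue):
    return hippocampus.get(cue)  # returns None unless the cue is exact

# "Cortex": content-addressable lookup -- the nearest stored vector wins,
# so small variations in the cue (rotation, lighting) still retrieve it.
cortex = {
    "Alice": [0.9, 0.1, 0.3],
    "Bob":   [0.1, 0.8, 0.7],
}

def cortical_recall(cue_vec):
    return min(cortex, key=lambda name: math.dist(cortex[name], cue_vec))

print(hippocampal_recall("face_of_alice_frontal"))  # 'Alice'
print(hippocampal_recall("face_of_alice_rotated"))  # None -- wrong key
print(cortical_recall([0.85, 0.15, 0.35]))          # 'Alice' -- close enough
```

The exact-key store fails on any perturbed cue, while the nearest-match store degrades gracefully, which is the invariance property the comment describes.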
As computer scientists, we get really attached to binary logic because that's how computers work. I think it sometimes doesn't occur to us that it might not actually be the best way to model the universe.
This captures a little of the nature of semantic representations (storage of meanings and concepts). But of course, semantic representations differ hugely from, say, representations of a tennis serve, or the phone number you're repeating under your breath while you key it in...
I hadn't heard of Memrise before, but having read your posts, I can't wait to try it out.
Also, content-addressability may be implemented fairly independently of the actual structure, that is an encoding problem that is analogous to an error correcting code in a higher dimensional vector space.
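The error-correcting-code analogy can be sketched directly: store patterns as points in a high-dimensional binary space, then recover the nearest one from a corrupted cue. The patterns and noise levels here are made up for illustration:

```python
# Content-addressable retrieval as nearest neighbor in Hamming distance,
# analogous to decoding a noisy codeword against an ECC codebook.
import random

random.seed(0)
DIM = 256
stored = {name: [random.randint(0, 1) for _ in range(DIM)]
          for name in ["cat", "dog", "car"]}

def recall(noisy_cue):
    def hamming(v):
        return sum(a != b for a, b in zip(noisy_cue, v))
    return min(stored, key=lambda name: hamming(stored[name]))

# Corrupt 20% of the bits of "cat"; retrieval still succeeds because random
# high-dimensional codes are far apart (~DIM/2 bits from each other).
cue = list(stored["cat"])
for i in random.sample(range(DIM), DIM // 5):
    cue[i] ^= 1

print(recall(cue))  # 'cat'
```

The corrupted cue is only 51 bits from "cat" but roughly 128 bits from the other patterns, so decoding is robust, exactly the redundancy an error-correcting code buys.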
In contrast, long-term storage involves permanent changes in the synaptic weights between neurons, which survive any fluctuations in activation, and can then subtly influence computation ever after.
And for medium-term storage (from minutes to months, say), you have the hippocampus, which has a big hash table of pointers to long-term structures.
Of course, this is all a huge simplification :)
There are experiments involving fear memory that have shown that a fearful event can be "stored" in an ensemble of identifiable neurons, and even be turned on and off.
Then there are brain rhythms and sleep. Memories would become overwritten if they were stored in the same circuits over and over, so there are various theories about how memories are transferred to various parts of the cortex via coordinated rhythmic activity or sleep.
Closer to your question, the data structure that our serial thinking brain uses is language. We think, reason and communicate using it. Language has a tree-like syntax, but semantics are an unsolved problem. There is even a theory that suggests that brain rhythms may encode "sentences" into thoughts via neuronal oscillations.
Sebastian Seung is a leading researcher in the field of neuroscience called connectomics, which studies the wiring of the brain, and he is a professor at MIT's Department of Brain and Cognitive Sciences. He is focused on mapping the connections between each neuron and calls the mappings our "connectome," which he says is as individual as our genome.
He says scientists have hypothesized for years that each thought, each memory is stored as a neural connection. See his TED talk "I Am My Connectome" (http://www.ted.com/talks/lang/eng/sebastian_seung.html) and the Human Connectome Project (http://www.humanconnectomeproject.org).
-It is more biologically plausible than any other NN algorithm I've seen
-It results in creativity (in the video he has the computer "imagine the number 2")
-It pretty much explains why we need to sleep/dream. The network has to be run both forward (accepting sensory input) and backwards (generating simulated sensory input) in order to learn
-It emphasizes the point that the brain is NOT trying to do matrix multiply (or any other deterministic calculation) with random elements (if it was trying to be an analog computer it would be). The randomness is an essential part of the algorithm.
-can fill-in details as a result of noisy or missing input
-can sometimes "see" patterns in random noise
A comparable question would be wondering whether the computer you are using right now, at this point in time, is inside a for loop, or a while loop, or it's just using tail recursion.
Surely some cognitive scientist can make my point more explicit; sorry that all I can offer is a counterexample.
But we don't make extensive use of (say) analog computers, or computers with data buses having several million sub-channels. Hand us a machine employing such principles and it's back to square one. (Except for the lucky pure mathematician or two who got there first but whose work remains obscure right now, the way George Boole's work used to be obscure.)
And my stupid examples are just examples - I have no idea if the brain, or any bit of it, is best conceived of as an analog computer. Nobody knows what kind of computer the brain is like, except that it is almost entirely unlike the silicon-based digital computers that we build in the von Neumann tradition. And, presumably, when it comes time to discuss brain-based data structures they will turn out almost entirely unlike the structures in our digital-computer-data-structures textbooks.
I'm actually really surprised that no one has even mentioned the idea that the brain may be a quantum computer. Check out this Google Tech Talk entitled "Does an Explanation of Higher Brain Function require references to Quantum Mechanics" by Hartmut Neven. http://www.youtube.com/watch?v=4qAIPC7vG3Y
All data structures are simplified graphs.
The state of the physical universe is a massive graph in which interconnected objects are themselves massive assemblies of graphs of atoms and the atoms are graphs of subatomic particles. It's graphs all the way down. The properties of all systems - physical, chemical, economic, biological - emerge from the interactions between simple connected elements.
My opinion is that all knowledge is representable as a connected graph. The disconnect between our computers and our minds arises from the fact that brains are categorically not numerical machines but graph processing and pattern recognition engines. Neural networks are the underlying hardware and, with the typical elegance of nature, these are also graphs.
It should be possible to build a graph based language. The basic "Elements" [SICP] are easy to realise:
1. Primitive Expressions are graph nodes. They have identity and not much else.
2. Means of combination. Graphs can be added, subtracted etc.
3. Abstraction. A graph can be abstracted into a single node. We have no problem looking at a complex assembly of components as a single entity.
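The three "Elements" above can be sketched in a few lines. The class and method names here are invented for illustration; this is a toy, not a proposal for the language itself:

```python
# A minimal graph structure realizing the three SICP-style elements:
# primitive nodes, combination (union), and abstraction (collapse).
class Graph:
    def __init__(self):
        self.edges = {}  # node -> set of neighbors

    def add_edge(self, a, b):
        # 1. Primitive expressions: nodes with identity and not much else.
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def union(self, other):
        # 2. Means of combination: two graphs can be merged into one.
        result = Graph()
        for g in (self, other):
            for a, nbrs in g.edges.items():
                for b in nbrs:
                    result.add_edge(a, b)
        return result

    def abstract(self, nodes, name):
        # 3. Abstraction: collapse a subgraph into a single named node,
        # rewiring external edges to point at the new node.
        result = Graph()
        for a, nbrs in self.edges.items():
            for b in nbrs:
                a2 = name if a in nodes else a
                b2 = name if b in nodes else b
                if a2 != b2:
                    result.add_edge(a2, b2)
        return result

g = Graph()
g.add_edge("cpu", "ram")
g.add_edge("cpu", "disk")
g.add_edge("disk", "controller")
machine = g.abstract({"cpu", "ram", "disk"}, "computer")
print(machine.edges)  # {'computer': {'controller'}, 'controller': {'computer'}}
```

The `abstract` step is the interesting one: a complex assembly of components becomes a single entity, while its external connections survive.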
Since Google and Facebook are two massive platforms whose value arises from direct interaction with planet-scale graphs with billions of nodes, would these platforms be easier to build if our computers were more graph-oriented? I would like to believe so.
Gremlin is a graph-based language (http://gremlin.tinkerpop.com/).
Heisenberg's uncertainty principle implies that some things are just unknowable. However, only the most basic constituents of physical reality demonstrate noticeable quantum behaviour. The macroscopic universe is fairly deterministic. You don't need to jump in front of a moving bus to learn that Newtonian mechanics applies 100% of the time in a very deterministic way.
You say that the "macroscopic" universe is fairly deterministic, meaning things that are about the same size as your brain. But is that an observation about the universe, or about your brain?
The appearance of "randomness" at very small scales can be explained as non-determinism, or as a deterministic effect of some property that we haven't yet detected.
The point being that the universe has not been shown to have a "fundamentally non-deterministic nature".
Never mind that, though... So you have a general solution to the n-body problem? I'm being facetious, of course. The hardness of the n-body problem isn't necessarily an expression of fundamental randomness rather than technical uncertainty. But one way or another, aren't they both an expression of the same thing?
Modeling an n-body system in the physical universe exactly means modeling every piece of information in the universe. If you don't do that, the unknowns will multiply into significant divergence at some t, however distant. Quantum theory suggests that even if you did have a computer the size of the universe that didn't affect the universe, it is intrinsically impossible to make an accurate prediction.
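The divergence claim can be demonstrated with a toy chaotic system. The logistic map stands in for an n-body system here; the specific initial values are arbitrary:

```python
# Two states differing by an unmeasurably small amount separate to
# macroscopic scale -- the "unknowns multiply into significant divergence".
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.4, 0.4 + 1e-12  # a difference far below any plausible measurement
max_gap = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap > 0.5)  # True: the tiny unknown has grown to order one
```

The gap roughly doubles each step, so an error of 1e-12 saturates in about forty iterations; any part of the state you leave unmodeled eventually dominates the prediction.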
I can't know where a particle will be in a second, or even if it will exist at all. I can't know if the Earth will be hit by an asteroid in a hundred years. I can't know who will win the Presidential election next year. The more accurately I model these systems, the more their outcome (or rather the outcome of the abstract macro-system of which I become aware) becomes dependent on the few things I don't know-- to the point that simply the act of checking the accuracy of the prediction has an unpredictable effect.
At that point, it seems like splitting hairs to say "Yes, but it's still really deterministic." What "real" are you talking about? Certainly none that I have experience with. But that doesn't matter either, because even granting that:
The universe might be deterministic, and it might be nondeterministic. It's unpredictable. So how does determinism become the default?
> The hardness of the n-body problem isn't necessarily an expression of fundamental randomness rather than technical uncertainty. But one way or another, aren't they both an expression of the same thing?
No, they are not. Randomness is randomness, and uncertainty is uncertainty. It's entirely possible to have bounded uncertainty about the amount of randomness in a contrived system.
> The more accurately I model these systems, the more their outcome (or rather the outcome of the abstract macro-system of which I become aware) becomes dependent on the few things I don't know
No. The accuracy of your model of a system has no effect on what influences the system.
The uncertainty in the outcome of your model depends on the uncertainty in the inputs, but that's axiomatic.
> to the point that simply the act of checking the accuracy of the prediction has an unpredictable effect.
If and only if your checking mechanism is part of the system. Now, every checking mechanism we're likely to deal with is part of the universe, but that doesn't make the universe nondeterministic, merely impossible to isolate.
> Modeling an n-body system in the physical universe exactly means modeling every piece of information in the universe. If you don't do that, the unknowns will multiply into significant divergence at some t, however distant.
Yes, modelling a deterministic system requires modelling a deterministic system. If you model it except for the influence of some parts, your results will be what the model would have been in the absence of those parts.
If you don't model everything, you won't model everything. This is not an argument in favor of nondeterminism.
> Quantum theory suggests that even if you did have a computer the size of the universe that didn't affect the universe, it is intrinsically impossible to make an accurate prediction.
That is one interpretation; there are a number of interpretations of quantum theory that are deterministic.
> At that point, it seems like splitting hairs to say "Yes, but it's still really deterministic." What "real" are you talking about? Certainly none that I have experience with.
Consider a system with only two values, which we'll call "N" and "time" for the sake of my sanity. We can imagine a purely deterministic system where N = time * 1000. We can also have a nondeterministic system where N = Nprev + (1000 +- 1). If our measurement apparatus can only measure N's value relative to the previous value with an uncertainty of 10, it will be unable to distinguish between the two systems. This does not imply that the systems are the same, and it is not splitting hairs to consider them different. The first is "really" deterministic, and the second is "really" nondeterministic.
If our measurement apparatus improved such that we could measure with an uncertainty of .1, we would be able to establish that the first system was consistent with both nondeterminism and determinism, and the second system consistent only with nondeterminism. Nondeterminism can never be epistemologically ruled out; this does not mean we should conclude that it exists in reality.
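The two toy systems above can be simulated directly. The measurement model here is a simple uniform error, chosen for illustration:

```python
# With coarse measurement the two systems look the same; with fine
# measurement only the second shows irreducible spread.
import random

random.seed(1)

def deterministic_delta():
    return 1000                          # N = time * 1000, so each delta is 1000

def nondeterministic_delta():
    return 1000 + random.choice([-1, 1])  # N = Nprev + (1000 +- 1)

def measure(value, uncertainty):
    return value + random.uniform(-uncertainty, uncertainty)

# Uncertainty 10: the +-1 of the second system is buried in measurement noise,
# so both streams of deltas look like "1000-ish".
coarse_det = [measure(deterministic_delta(), 10) for _ in range(5)]
coarse_non = [measure(nondeterministic_delta(), 10) for _ in range(5)]
print(all(abs(d - 1000) <= 11 for d in coarse_det + coarse_non))  # True

# Uncertainty 0.1: the first system's deltas cluster tightly at 1000,
# while the second's split into two bands around 999 and 1001.
fine_det = [measure(deterministic_delta(), 0.1) for _ in range(20)]
fine_non = [measure(nondeterministic_delta(), 0.1) for _ in range(20)]
print(all(abs(d - 1000) <= 0.1 for d in fine_det))  # True
print(any(abs(d - 1000) > 0.5 for d in fine_non))   # True
```

This matches the point in the text: improving the apparatus can show a system to be inconsistent with determinism, but can never rule nondeterminism out.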
> So how does determinism become the default?
Because for all the cases where we have enough data and processing power to test, determinism has been shown to be consistent with the data. If that were not the case, it wouldn't be the default.
Besides which, earlier you asked about the 'fundamentally non-deterministic nature of the universe'. My point is merely that no such nature has been demonstrated, nor is there any evidence to suggest it. There could be - if things often seemed to happen without cause, there would be plenty of evidence of cases where determinism is inconsistent with data. The closest we get is with quantum measurements, and it's far from demonstrated that these are genuinely a case of non-determinism.
Working backwards from this, could we build (today) a graph-based (or whatever-structure) recording device that will store data in much the same way we build memories? Such would obviously require a greater understanding of the interconnection of neurons and the storage of memory, as discussed elsewhere in this topic. [Small edit: Knowing the data structure is nice; being able to use it is golden.]
I personally believe the next great "hack" our species should embark upon is the brain and the body. We need to be more robust if we are ever to escape this rock. (And the dreamer in me wishes we could keep our consciousnesses and/or memories around eternally, but that introduces entirely different problems, and is probably an unrealistic ideal.)
I would say the unconscious part is basically the equivalent of visiting a web page, Googling every term on that page, visiting those pages, and repeating ad nauseam (so like an inverted index, perhaps, on concepts/ideas/sensations rather than terms).
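That "Google every term, visit those pages, repeat" loop can be sketched as a toy inverted index. All the memories and concepts here are invented for illustration:

```python
# Each "page" (memory) is indexed by its concepts; activating one concept
# pulls in related memories, which in turn activate their own concepts.
memories = {
    "beach_trip": {"sand", "sun", "ocean"},
    "sunburn":    {"sun", "pain", "lotion"},
    "aquarium":   {"ocean", "fish", "glass"},
}

# Build the inverted index: concept -> set of memories containing it.
index = {}
for mem, concepts in memories.items():
    for c in concepts:
        index.setdefault(c, set()).add(mem)

def spread(seed_concept, rounds=2):
    # Alternate concept -> memories -> concepts for a few rounds.
    concepts, found = {seed_concept}, set()
    for _ in range(rounds):
        for c in concepts:
            found |= index.get(c, set())
        concepts = set().union(*(memories[m] for m in found))
    return found

print(sorted(spread("sun")))  # ['aquarium', 'beach_trip', 'sunburn']
```

Starting from "sun" reaches the aquarium memory only on the second round, via the shared "ocean" concept, which is the spreading behavior the comment describes.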
As for the conscious part, that's where the magic is. The unconscious brains generates breathtaking amounts of useless crap, but the conscious brain manages to filter it and do things like design software and make movies. I can't really speculate on how the conscious mind does this. Creativity is different than recall.
There is a feedback loop too. If your conscious mind starts ruminating on stuff, then the unconscious mind will generate more of it. We had a discussion about how writing down dreams causes you to produce/remember more of them. There is also the phenomenon of playing a game like Tetris or Scrabble, and then your unconscious brain starts "rehearsing" all the moves in the background (sometimes against your will). It knows what you've been doing and just starts going off and making connections.
(If you are interested in this general subject, read "On Intelligence" by Jeff Hawkins. It will at least get you thinking, and he has fairly concrete ideas. He doesn't go into what I am writing about here, but as far as books that pertain to your question go, I was reminded of it.)
The point is that it isn't just a question of pure storage and retrieval. There's quite a bit of varied experience happening on the same root (and its storage), across multiple passes and layers.
Content that gets shared/liked more, gets replicated, re-iterated on, transformed, re-tagged.
As time progresses, you'll find more content similar to the parent, being generated -- re-experienced via dreams and the subconscious.
Eventually, when it's necessary to dig out the piece of content using tags or searches, it'll end up finding the most 'linked-to' piece - possibly one that was associated with an explosion of favorable chemicals.
There's some evidence to suggest that the same experience isn't stored as a single piece of memory. During the process of consolidation [when a long-term memory gets etched], the 'experience' being transcribed goes through iterations, with variations being stored as well, some of them decaying almost instantly.
It's possible the brain applies 'Instagram'-like filters while etching these memories (a process that happens over weeks).
It'll be interesting if in the future, we could modify/augment these 'filter' processes. Both at the storage and retrieval stages.
It looks like Numenta is making some steady progress in trying to commercialize its HTM technology as well.
-> We can perform a visual lookup (identify the circle colored differently from the other circles in a group of circles) in O(1) time. Or if we are asked to locate a friend in an array of people (where the array fits in our field of view), we can identify said friend in constant time.
-> By nature, we classify objects based on how we use them. So a table and a chair have the same physical 4-legged, flat-top structure, but we differentiate them because we use them differently.
So, my guess:
-> A highly trained decision tree allowing us to perform classification of objects in our environment based on their use (the training set is whatever is in our field of view, and as such we are bombarded with large amounts of data). A Hebbian-rule-based ANN for training.
For dealing with visual stimuli at least, I would bet that this is the model we are using right now.
Also, our classifier seems to be operating in parallel on all the objects available in our FOV.
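The Hebbian rule mentioned above ("neurons that fire together wire together") can be sketched in a few lines. The learning rate, inputs, and teaching signal here are all made up for illustration:

```python
# Hebbian learning: the weight between two units grows with the product
# of their activities, delta_w = lr * pre * post.
def hebbian_update(w, pre, post, lr=0.1):
    return [wi + lr * p * post for wi, p in zip(w, pre)]

w = [0.0, 0.0, 0.0]
# Repeatedly pair the input pattern [1, 1, 0] with an active output unit:
for _ in range(10):
    pre = [1.0, 1.0, 0.0]
    post = sum(wi * p for wi, p in zip(w, pre)) + 1.0  # +1.0: teaching signal
    w = hebbian_update(w, pre, post)

# Inputs that fired with the output have strengthened; the silent one hasn't.
print(w[0] > 1.0 and w[2] == 0.0)  # True
```

Note that with pure Hebbian updates the active weights grow without bound; real models add normalization or decay, omitted here to keep the rule itself visible.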
So quite possibly the data storage used by a person with one particular upbringing/education would differ from a person with a different one.
In addition, different parts of the brain are likely storing data in different ways.
I guess that, while on a neurological level storage of different types of memory might work similarly, the 'data structure' is perhaps different.
At any given instant there is a set of excited concepts. Then there is some algorithm for time evolution of the excitations.
(I bet you can find a copy of the aforementioned book on http://library.nu as it contains a pretty wicked intro to many different paradigms regarding these matters)
I wish we could someday know this and learn to control what we want to store for later use and what not to.
For example, if I looked at a door and wanted to remember what that door was for, I could "see" a vast 2- or 3-dimensional array of hundreds of doors that I had seen before, arranged so that similar doors were near each other. Then I was aware of some kind of focusing in on the region of door-space where doors similar to the target door resided, then of some kind of serial comparison of each door in that region with the door I was looking at to find a match, and finally the data for the best match was made available to my normal consciousness.
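Whatever was really happening, the described lookup resembles a coarse-to-fine search: narrow to a region where similar items cluster, then compare serially within it. A toy sketch, with all features and door names invented:

```python
# Two-stage retrieval: coarse bucketing into a region of "door-space",
# then serial comparison within the region only.
import math

# Doors embedded in a 2-D feature space (say, width and ornateness).
doors = {
    "front_door": (0.9, 0.2),  "barn_door": (0.95, 0.25),
    "cat_flap":   (0.1, 0.1),  "cathedral": (0.8, 0.95),
}

# Stage 1: group doors into coarse regions keyed by rounded coordinates.
regions = {}
for name, vec in doors.items():
    key = (round(vec[0]), round(vec[1]))
    regions.setdefault(key, []).append(name)

def recall_door(target):
    key = (round(target[0]), round(target[1]))
    candidates = regions.get(key, [])
    # Stage 2: serial comparison against only the candidates in the region.
    return min(candidates, key=lambda n: math.dist(doors[n], target))

print(recall_door((0.92, 0.22)))  # 'front_door'
```

The coarse stage is what makes lookup fast: only the handful of doors in the matching region are ever compared one by one.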
During another part of that trip I was aware of audio processing. I'd hear someone talking and first hear it as unintelligible sound, then I'd be aware of the sounds being broken down into separate sound units, and then recognized as English words, and then the relationships between the words being recognized, and then I'd become aware of the meaning of what I'd heard.
I also had a time during that trip where I was watching visual processing. I'd be aware of how my mind was noticing things in the scene I was looking at, recognizing objects and remembering what they were, and combining that to build an understanding of the scene.
Now I have no way of knowing how much of this was just hallucinations shaped by my knowledge of computers, and how much really was the LSD actually letting me observe consciously mental processes that normally operate as black boxes to the conscious.
It's too bad research using LSD was greatly curtailed when the drug was banned. My suspicion is that what I perceived on that trip was a mix of reality and hallucinations driven by my computer knowledge. With enough experimentation, with people with different backgrounds observing and reporting, we could probably get some real insights into what is really going on in there.