Ask HN: What data structure does our brain use?
135 points by Aarvay 2232 days ago | 78 comments
Let's have an open discussion on this. We might be able to come up with something!

-- Aarvay




There are so many differences that one's standard intuitions as a computer scientist can be very misleading...

I wrote on this elsewhere:

http://blog.memrise.com/2011/05/how-is-memory-stored-in-brai...

http://blog.memrise.com/2011/05/how-are-brains-different-fro...

For instance:

- Storage and parallel computation in the brain are very expansive and cheap, so the brain prefers to store rather than compute where it can.

- Above all, the brain's storage is highly content-addressable. Similar things in the world are stored with similar representations, so that the brain can generalize, and see commonalities. This is not a graph - graphs are discretized - this is much more flexible.

- Even the acts of storage and retrieval are themselves a kind of computation, a transformation, a compression and a learning experience.

- Memories are not clean silos. Storing a new memory can subtly (and not so subtly) affect other nearby or related memories.

- Different parts of the brain use different storage parameters. For instance, the hippocampus is like a hash table, storing each memory relatively cleanly and in isolation, but can only be accessed with exactly the cue. In contrast, the cortex stores memories in a much more content-addressable, overlapping way that's invariant to many small differences (e.g. we can recognize a face whether it's rotated, sunny, tanned, close up, obscured).
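
To make that hash-table vs. content-addressable contrast concrete, here's a toy sketch in Python (class names and vectors invented for illustration, not a model of real neural circuitry): the "hippocampus" answers only on an exact cue, while the "cortex" returns whatever stored pattern is nearest to a noisy cue.

    import numpy as np

    class Hippocampus:
        """Exact-cue storage: a miss on anything but the precise key."""
        def __init__(self):
            self.store = {}
        def write(self, cue, memory):
            self.store[cue] = memory
        def read(self, cue):
            return self.store.get(cue)  # None unless the cue matches exactly

    class Cortex:
        """Content-addressable storage: return the nearest stored pattern."""
        def __init__(self):
            self.patterns = []
        def write(self, vec, memory):
            self.patterns.append((np.asarray(vec, float), memory))
        def read(self, vec):
            vec = np.asarray(vec, float)
            return min(self.patterns,
                       key=lambda p: np.linalg.norm(p[0] - vec))[1]

    cortex = Cortex()
    cortex.write([1.0, 0.0, 1.0], "grandmother's face")
    print(cortex.read([0.9, 0.1, 0.8]))  # a slightly different cue still retrieves it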


What does it say about me that all of that makes perfect intuitive sense?

As computer scientists, we get really attached to binary logic because that's how computers work. I think sometimes it doesn't occur to us that it might not actually be the best way to model the universe.


Every time I write non-numerical code, it is fairly obvious that translation to this unnatural 'intermediate representation' requires extra effort.


It's also very, very high-dimensional. Imagine a gigantic space, where similar thoughts could be placed next to one another. Thinking would be moving around this space. Free association would be taking a step or two in a random direction. Comparison would be a vector.

This captures a little of the nature of semantic representations (storage of meanings and concepts). But of course, semantic representations differ hugely from, say, representations of a tennis serve, or the phone number you're repeating under your breath while you key it in...
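
For what it's worth, that picture is easy to caricature in a few lines of Python (all vectors and vocabulary invented for illustration; a real system would learn its embeddings):

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 300
    concepts = {w: rng.normal(size=dim) for w in ["dog", "cat", "car"]}

    def nearest(vec):
        # cosine similarity: nearby directions are similar thoughts
        return max(concepts, key=lambda w: np.dot(concepts[w], vec) /
                   (np.linalg.norm(concepts[w]) * np.linalg.norm(vec)))

    thought = concepts["dog"]
    step = thought + 0.1 * rng.normal(size=dim)     # free association: a small random step
    print(nearest(step))                            # usually still "dog", sometimes a neighbor
    comparison = concepts["cat"] - concepts["dog"]  # comparison as a vector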


Thank you for describing it so well here. [http://blog.memrise.com/2011/05/how-is-memory-stored-in-brai...] The actor/play analogy worked memorably well.

I hadn't heard of Memrise before, but having read your posts, I can't wait to try it out.


Can you distinguish between the storage schemes in short vs. long term memory? A transition takes place at some point. Long-term storage may not be the best indicator of actual structures used in working memory, which is where the computation actually happens.

Also, content-addressability may be implemented fairly independently of the actual structure; that is an encoding problem analogous to an error-correcting code in a higher-dimensional vector space.


Some forms of short-term storage are volatile, like RAM. They store by coaxing the neural activation into a stable attractor - as long as all the neurons keep firing in sequence, the memory stays alive. This is fast to create, since it doesn't require any hardware writes (changes in synaptic weights).

In contrast, long-term storage involves permanent changes in the synaptic weights between neurons, which survive any fluctuations in activation, and can then subtly influence computation ever after.

And for medium-term storage (from minutes to months, say), you have the hippocampus, which has a big hash table of pointers to long-term structures.

Of course, this is all a huge simplification :)


If you're interested in pursuing this, I'd start here (Ken was my former PhD advisor, but I still think this is a rich but comprehensible introduction):

http://psych.colorado.edu/~oreilly/papers/OReillyNorman02_ti...


Standing waves.


Our insight into memory formation is still limited. The most studied candidate is LTP/D[1], the change of the weights of synapses between neurons. Memory formation is a complex process, though. While LTP can be induced with a short burst of spikes, its stabilization and maintenance depend on a sizeable number of molecules and genes. We still don't know at what level memories may be stored; it could be the level of single synapses, groups of synapses, single neurons, or ensembles of neurons.

There are experiments involving fear memory that have shown that a fearful event can be "stored" in an ensemble of identifiable neurons, and even be turned on and off [2].

Then there are brain rhythms and sleep. Memories would become overwritten if they were stored in the same circuits over and over, so there are various theories about how memories are transferred to various parts of the cortex via coordinated rhythmic activity or sleep.

Closer to your question, the data structure that our serial thinking brain uses is language. We think, reason and communicate using it. Language has a tree-like syntax, but semantics are an unsolved problem. There is even a theory that suggests that brain rhythms may encode "sentences" into thoughts via neuronal oscillations[3].

1: http://en.wikipedia.org/wiki/Long-term_potentiation

2: http://www.silvalab.com.cnchost.com/silvapapers/ZhouNN2009.p...

3: http://osiris.rutgers.edu/BuzsakiHP/Publications/PDFs/Buzsak...


The brain most closely resembles a graph. See Marko's post on "Graphs, Brains, and Gremlin" ( http://markorodriguez.com/2011/07/14/graphs-brains-and-greml...).

Sebastian Seung is a leading researcher in the field of neuroscience called connectomics, which studies the wiring of the brain, and he is a professor at MIT's Department of Brain and Cognitive Sciences. He is focused on mapping the connections between neurons and calls the mapping our "connectome," which he says is as individual as our genome.

He says scientists have hypothesized for years that each thought, each memory is stored as a neural connection. See his TED talk "I Am My Connectome" (http://www.ted.com/talks/lang/eng/sebastian_seung.html) and the Human Connectome Project (http://www.humanconnectomeproject.org).


I second the recommendation of Seung's TED talk. A great watch.


Thanks espeed for the great info!


Geoffrey Hinton "Next Generation Neural Networks"

http://www.youtube.com/watch?v=AyzOUbkUf3M

-It is more biologically plausible than any other NN algorithm I've seen

-It results in creativity (in the video he has the computer "imagine the number 2")

-It pretty much explains why we need to sleep/dream. The network has to be run both forward (accepting sensory input) and backwards (generating simulated sensory input) in order to learn

-It emphasizes the point that the brain is NOT trying to do matrix multiplication (or any other deterministic calculation) with random elements (if it were trying to be an analog computer, it would be). The randomness is an essential part of the algorithm.
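
For anyone who wants the flavor without watching the whole lecture, here's a minimal sketch of the forward/backward sampling idea (an untrained toy RBM with invented sizes; real training via contrastive divergence is omitted):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_visible, n_hidden = 6, 3
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # untrained weights

    def up(v):    # forward: sample hidden units from "sensory" input
        return (rng.random(n_hidden) < sigmoid(v @ W)).astype(float)

    def down(h):  # backward: sample "imagined" input from hidden units
        return (rng.random(n_visible) < sigmoid(W @ h)).astype(float)

    v = rng.integers(0, 2, n_visible).astype(float)
    for _ in range(10):   # alternating Gibbs sampling; the randomness is essential
        h = up(v)
        v = down(h)
    print(v)              # a sample from the model's "imagination"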


I agree. Hopfield networks, of which Hinton's Boltzmann machines are substantial elaborations, have many human-like properties:

-can fill in details as a result of noisy or missing input

-can sometimes "see" patterns in random noise
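
A Hopfield net small enough to read in one sitting (a toy with one stored pattern; real networks store many):

    import numpy as np

    pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0)          # Hebbian weights, no self-connections

    noisy = pattern.copy()
    noisy[:2] *= -1                 # corrupt two bits: "noisy or missing input"

    state = noisy.astype(float)
    for _ in range(5):              # iterate until the attractor is reached
        state = np.sign(W @ state)
    print(np.array_equal(state, pattern))   # True: the details were filled back in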


The question is fundamentally flawed, in that "data structure" is a concept used by programmers to communicate with computers (or between programmers).

A comparable question would be wondering whether the computer you are using right now, at this point in time, is inside a for loop, or a while loop, or it's just using tail recursion.

Surely some cognitive scientist can make my point more explicit; sorry that all I can offer is a counterexample.


Yes, isn't our conception of data structures bound up fairly tightly with the available storage media and data buses? We have thought a lot about how to organize and retrieve data on tapes, and spinning disks, and random-access memories composed of discrete little bins, each storing a bit and addressed in rows and columns.

But we don't make extensive use of (say) analog computers, or computers with data buses having several million sub-channels. Hand us a machine employing such principles and it's back to square one. (Except for the lucky pure mathematician or two who got there first but whose work remains obscure right now, the way George Boole's work used to be obscure.)

And my stupid examples are just examples - I have no idea if the brain, or any bit of it, is best conceived of as an analog computer. Nobody knows what kind of computer the brain is like, except that it is almost entirely unlike the silicon-based digital computers that we build in the von Neumann tradition. And, presumably, when it comes time to discuss brain-based data structures they will turn out almost entirely unlike the structures in our digital-computer-data-structures textbooks.


Agreed. Another example is quantum computing.

I'm actually really surprised that no one has even mentioned the idea that the brain may be a quantum computer. Check out this Google Tech Talk entitled "Does an Explanation of Higher Brain Function require references to Quantum Mechanics" by Hartmut Neven. http://www.youtube.com/watch?v=4qAIPC7vG3Y


Short answer: A graph.

All data structures are simplified graphs.

The state of the physical universe is a massive graph in which interconnected objects are themselves massive assemblies of graphs of atoms and the atoms are graphs of subatomic particles. It's graphs all the way down. The properties of all systems - physical, chemical, economic, biological - emerge from the interactions between simple connected elements.

My opinion is that all knowledge is representable as a connected graph. The disconnect between our computers and our minds arises from the fact that brains are categorically not numerical machines but graph processing and pattern recognition engines. Neural networks are the underlying hardware and, with the typical elegance of nature, these are also graphs.

It should be possible to build a graph based language. The basic "Elements" [SICP] are easy to realise:

1. Primitive Expressions are graph nodes. They have identity and not much else.

2. Means of combination. Graphs can be added, subtracted etc.

3. Abstraction. A graph can be abstracted into a single node. We have no problem looking at a complex assembly of components as a single entity.

Since Google and Facebook are two massive platforms whose value arises from direct interaction with planet-scale graphs with billions of nodes, would these platforms be easier to build if our computers were more graph oriented? I would like to believe so.
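
As a sketch of what those three Elements might look like (an invented API, nothing standard):

    class Node:
        def __init__(self, label):
            self.label = label                 # 1. primitive: identity and not much else

    class Graph:
        def __init__(self, nodes=(), edges=()):
            self.nodes, self.edges = set(nodes), set(edges)
        def __or__(self, other):               # 2. combination: graph union
            return Graph(self.nodes | other.nodes, self.edges | other.edges)
        def abstract(self, label):             # 3. abstraction: collapse to one node
            return Node(label)

    a, b, c = Node("a"), Node("b"), Node("c")
    g1 = Graph({a, b}, {(a, b)})
    g2 = Graph({b, c}, {(b, c)})
    engine = (g1 | g2).abstract("engine")      # a complex assembly seen as one entity
    print(engine.label)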


And a graph is a relation, and a relation is a function, etc. Just because you can model everything with graphs doesn't make them special. You can model everything with lots of things.


A correction: a relation is not necessarily a function, but a function is a relation. A function is a restriction of a relation f pairing things in A with things in B, such that each thing_a in A is paired with exactly one thing_b in B.


A relation between X and Y is a function X -> Y -> Bool.
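
In code (a toy illustration): a relation is a predicate over pairs, and a function is the special case where each x relates to exactly one y.

    # A relation between X and Y as a predicate: X -> Y -> bool.
    def divides(x, y):
        return y % x == 0            # relation: many y's may relate to one x

    def as_relation(f):
        """Any function f: X -> Y is the relation {(x, y) | y == f(x)}."""
        return lambda x, y: y == f(x)

    double_rel = as_relation(lambda x: 2 * x)
    print(divides(3, 9), double_rel(3, 6))   # True True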


Yes, sorry, you are right. You were talking about "representable/modelled by", which I was conflating with "equivalent to". A subtle distinction I missed.


I believe we are considering an abstract data structure here.


It should be possible to build a graph based language.

Gremlin is a graph-based language (http://gremlin.tinkerpop.com/).


How do you account for the fundamentally non-deterministic nature of the universe?


Actually representing the entire state of the universe sounds rather ambitious. However, the very act of 'imaging' it would force the entire system into a consistent state by quantum collapse. The universe would be represented but changed; the original multitude of superimposed states would be lost.

Heisenberg's uncertainty principle implies that some things are just unknowable. However, only the most basic constituents of physical reality demonstrate noticeable quantum behaviour. The macroscopic universe is fairly deterministic. You don't need to jump in front of a moving bus to learn that Newtonian mechanics applies 100% of the time in a very deterministic way.


This may be a nitpick, but how can something be "fairly deterministic"? Is it possible for there to be degrees of determinism? I would consider determinism to be binary: either something is deterministic, or it is not. If a thing cannot be demonstrated to be deterministic 100% of the time, then by its very definition it is non-deterministic. By that logic, I would actually conclude that the entire universe does in fact behave deterministically. If it didn't, then I don't see how science would even be possible.


I was avoiding an absolute statement because macroscopic objects are entirely capable of behaving in unpredictable ways, however the odds against are so high that the chance of this happening is infinitesimal.


But Newtonian mechanics doesn't apply 100% of the time, right? It applies most of the time, from your perspective, because most of the sizes, distances and speeds you deal with are very large, short, and slow compared to their relevant universal constants. But there's a very long tail of very small, far, fast things that do concern you, and there, rarely, your "100%" approximation falls down.

You say that the "macroscopic" universe is fairly deterministic, meaning things that are about the same size as your brain. But is that an observation about the universe, or about your brain?


It's an observation about the universe. Yes, things about the size of his brain appear to have deterministic behavior, but so do mountains, oceans, moons, planets and stars - which are, needless to say, nowhere near the size of his brain.

The appearance of "randomness" at very small scales can be explained as non-determinism, or as a deterministic effect of some property that we haven't yet detected.

The point being that the universe has not been shown to have a "fundamentally non-deterministic nature".


Sorry, but your brain (~10^-1m) is way, way closer in size to the Sun (~10^9m) than it is to the Planck length (~10^-35m).

Never mind that, though... So you have a general solution to the n-body problem? I'm being facetious, of course. The hardness of the n-body problem isn't necessarily an expression of fundamental randomness rather than technical uncertainty. But one way or another, aren't they both an expression of the same thing?

Modeling an n-body system in the physical universe exactly means modeling every piece of information in the universe. If you don't do that, the unknowns will multiply into significant divergence at some t, however distant. Quantum theory suggests that even if you did have a computer the size of the universe that didn't affect the universe, it is intrinsically impossible to make an accurate prediction.

I can't know where a particle will be in a second, or even if it will exist at all. I can't know if the Earth will be hit by an asteroid in a hundred years. I can't know who will win the Presidential election next year. The more accurately I model these systems, the more their outcome (or rather the outcome of the abstract macro-system of which I become aware) becomes dependent on the few things I don't know-- to the point that simply the act of checking the accuracy of the prediction has an unpredictable effect.

At that point, it seems like splitting hairs to say "Yes, but it's still really deterministic." What "real" are you talking about? Certainly none that I have experience with. But that doesn't matter either, because even granting that:

The universe might be deterministic, and it might be nondeterministic. It's unpredictable. So how does determinism become the default?


In advance, sorry about the late reply.

> The hardness of the n-body problem isn't necessarily an expression of fundamental randomness rather than technical uncertainty. But one way or another, aren't they both an expression of the same thing?

No, they are not. Randomness is randomness, and uncertainty is uncertainty. It's entirely possible to have bounded uncertainty about the amount of randomness in a contrived system.

> The more accurately I model these systems, the more their outcome (or rather the outcome of the abstract macro-system of which I become aware) becomes dependent on the few things I don't know

No. The accuracy of your model of a system has no effect on what influences the system.

The uncertainty in the outcome of your model depends on the uncertainty in the inputs, but that's axiomatic.

> to the point that simply the act of checking the accuracy of the prediction has an unpredictable effect.

If and only if your checking mechanism is part of the system. Now, every checking mechanism we're likely to deal with is part of the universe, but that doesn't make the universe nondeterministic, merely impossible to isolate.

> Modeling an n-body system in the physical universe exactly means modeling every piece of information in the universe. If you don't do that, the unknowns will multiply into significant divergence at some t, however distant.

Yes, modelling a deterministic system requires modelling a deterministic system. If you model it except for the influence of some parts, your results will be what the model would have been in the absence of those parts.

If you don't model everything, you won't model everything. This is not an argument in favor of nondeterminism.

> Quantum theory suggests that even if you did have a computer the size of the universe that didn't affect the universe, it is intrinsically impossible to make an accurate prediction.

That is one interpretation; there are a number of interpretations of quantum theory that are deterministic.

> At that point, it seems like splitting hairs to say "Yes, but it's still really deterministic." What "real" are you talking about? Certainly none that I have experience with.

Consider a system with only two values, which we'll call "N" and "time" for the sake of my sanity. We can imagine a purely deterministic system where N = time * 1000. We can also have a nondeterministic system where N = Nprev + (1000 +- 1). If our measurement apparatus can only measure N's value relative to the previous value with an uncertainty of 10, it will be unable to distinguish between the two systems. This does not imply that the systems are the same, and it is not splitting hairs to consider them different. The first is "really" deterministic, and the second is "really" nondeterministic.

If our measurement apparatus improved such that we could measure with an uncertainty of .1, we would be able to establish that the first system was consistent with both nondeterminism and determinism, and the second system consistent only with nondeterminism. Nondeterminism can never be epistemologically ruled out; this does not mean we should conclude that it exists in reality.

> So how does determinism become the default?

Because for all the cases where we have enough data and processing power to test, determinism has been shown to be consistent with the data. If that were not the case, it wouldn't be the default.

Besides which, earlier you asked about the 'fundamentally non-deterministic nature of the universe'. My point is merely that no such nature has been demonstrated, nor is there any evidence to suggest it. There could be - if things often seemed to happen without cause, there would be plenty of evidence of cases where determinism is inconsistent with data. The closest we get is with quantum measurements, and it's far from demonstrated that these are genuinely a case of non-determinism.


First, non-determinism would have to be demonstrated.


The problem is, all the things we don't understand could be labeled as non-determined (e.g. quantum particles moving "randomly").


I don't see how. "Not predictable given what I've observed so far" is a long way from "not predictable".


Weird... because I think those are the same.


They only appear non-determined when you think of them as particles or waves. They're neither.


The explanation is fair enough. Moreover, it is convenient to model it as a graph.


Along a similar subject, should we someday be able to replace sets of neurons in our brains with nanomachines (built to replicate neurons), would we be able to replicate these data structures? Or are they something intrinsic to our organic brain? Similarly, if we could build even a larger scale version of such external to the brain, could we interface it with our existing consciousness? (Could we then 'share' memories?)

Working backwards from this, could we build (today) a graph-based (or whatever-structure) recording device that will store data in much the same way we build memories? Such would obviously require a greater understanding of the interconnection of neurons and the storage of memory, as discussed elsewhere in this topic. [Small edit: Knowing the data structure is nice; being able to use it is golden.]

I personally believe the next great "hack" our species should embark upon is the brain and the body. We need to be more robust if we are ever to escape this rock. (And the dreamer in me wishes we could keep our consciousnesses and/or memories around eternally, but that introduces entirely different problems, and is probably an unrealistic ideal.)


A big part of the brain is the divide between conscious and unconscious. Your brain is constantly making random associations. If I read an article about Steve Jobs, I might remember something a friend said 10 years ago about him; and then I might remember that this friend lives in Brooklyn now; and then think about other people I know who live in Brooklyn. The brain just likes to make connections; perhaps synesthesia is an example of it getting slightly overworked.

I would say the unconscious part is basically the equivalent of visiting a web page, Googling every term on that page, visiting those pages, and repeating ad nauseam (so like an inverted index, perhaps, on concepts/ideas/sensations rather than terms).
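
That Google-and-repeat loop is easy to caricature in code (a toy graph whose contents are invented):

    import random

    associations = {
        "Steve Jobs": ["an article about him", "an old friend"],
        "an article about him": ["Steve Jobs"],
        "an old friend": ["Brooklyn"],
        "Brooklyn": ["other people who live there"],
        "other people who live there": ["an old friend"],
    }

    def free_associate(start, steps=4):
        chain, node = [start], start
        for _ in range(steps):
            node = random.choice(associations.get(node, [start]))
            chain.append(node)   # the brain just likes to make connections
        return chain

    print(" -> ".join(free_associate("Steve Jobs")))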

As for the conscious part, that's where the magic is. The unconscious brain generates breathtaking amounts of useless crap, but the conscious brain manages to filter it and do things like design software and make movies. I can't really speculate on how the conscious mind does this. Creativity is different from recall.

There is a feedback loop too. If your conscious mind starts ruminating on stuff, then the unconscious mind will generate more of it. We had a discussion about how writing down dreams causes you to produce/remember more of them. There is also the phenomenon of playing a game like Tetris or Scrabble, and then your unconscious brain starts "rehearsing" all the moves in the background (sometimes against your will). It knows what you've been doing and just starts going off and making connections.

(If you are interested in this general subject, read "On Intelligence" by Jeff Hawkins. It will at least get you thinking, and he has fairly concrete ideas. He doesn't go into what I am writing about here, but as far as books that pertain to your question go, I was reminded of it.)


On Monday, it starts a low-speed read of a big array off magtape. Tuesday through Thursday, it continues to read the array. On Friday, a random number is chosen, and the activity stored in that array element is performed Friday night, possibly setting the REGRET flag. Saturday is spent shuffling the array, while on Sunday, a high-speed write of the data is performed back to tape. At 12AM Monday, the brain executes a GOTO 10 instruction, and the process repeats.


I'd like to imagine the brain is a lot more like the 'world wide web' as a data structure than a pure graph.

It isn't just a question of pure storage and retrieval: there's quite a bit of varied experience of the same root (and its storage) happening across multiple times and layers within.

Content that gets shared/liked more, gets replicated, re-iterated on, transformed, re-tagged. As time progresses, you'll find more content similar to the parent, being generated -- re-experienced via dreams and the subconscious.

Eventually, when it's necessary to dig out a piece of content using tags or searches, it'll end up finding the most 'linked-to' piece. Possibly one that was associated with an explosion of favorable chemicals.

-----

There's some evidence to suggest that the same experience, isn't stored as a single piece of memory. During the process of consolidation [when a long term memory gets etched], the 'experience' being transcribed goes through iterations. With variations being stored as well, some of them decaying almost instantly.

It's possible the brain applies 'Instagram'-like filters while etching these memories (a process that happens over weeks).

It'll be interesting if in the future, we could modify/augment these 'filter' processes. Both at the storage and retrieval stages.

[http://pubs.acs.org/cen/coverstory/85/8536cover.html] [http://en.wikipedia.org/wiki/Engram_(neuropsychology)]


The most recent claim in this that I've heard about and on which people are willing to bet money is Numenta's "Hierarchical Temporal Memory" = "HTM". Jeff Hawkins (of Palm's Graffiti fame) is behind this and he also wrote a book called "On Intelligence" which discusses some of these ideas.

It looks like Numenta is making some steady progress in trying to commercialize its HTM technology as well.


Some observations I came across in a cognition class:

-> We can perform a visual lookup (identify the circle colored differently from the other circles in a group of circles) in O(1) time. Or if we are asked to locate a friend in an array of people (where the array fits in our field of view), we can identify said friend in constant time.

-> By nature, we classify objects based on how we use them. So a table and a chair may have the same physical 4-legged, flat-top structure, but we differentiate them because we use them differently.

So, my guess :

-> A highly trained decision tree allowing us to classify objects in our environment based on their use (the training set is whatever is in our field of view, and as such we are bombarded with large amounts of data), with a Hebbian-rule-based ANN for training.

For dealing with visual stimuli at least, I would bet that this is the model we are using right now.

Also, our classifier seems to be operating in parallel on all the objects available in our FOV.
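
For concreteness, here's what a bare-bones Hebbian update looks like (toy sizes and made-up sparse "visual" input; a real model would use something closer to Oja's rule and actual stimuli):

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 16, 4
    W = rng.normal(scale=0.1, size=(n_out, n_in))
    lr = 0.01

    for _ in range(1000):                  # "training set": whatever is in the field of view
        x = (rng.random(n_in) < 0.2).astype(float)     # sparse input
        y = W @ x                                      # all units respond in parallel
        W += lr * np.outer(y, x)                       # Hebb: co-active pre/post strengthen
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded

    print(W.round(2))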


It's worth noting that the brain changes as we learn - e.g. when we learn maths, the brain's structure is altered in discernible ways:

http://med.stanford.edu/ism/2011/june/math.html

So quite possibly the data storage used by a person with one particular upbringing/education would differ from that of a person with a different one.

In addition, different parts of the brain are likely storing data in different ways.


A lot of work has been done on this issue, specifically on the mental lexicon - the lexemes/entities of language - and its storage. A lot of it was based on psychological testing, for example reaction times when words that are somehow associated are presented, etc.

I guess that, while on a neurological level the storage of different types of memory might work similarly, the 'data structure' is perhaps different.


Well, my bet is on it being something of a mix between a Bloom filter and a graph, i.e. the interconnectedness of a graph and the probabilistic nature of a Bloom filter would both be fundamental elements. Of course, this makes sense only as a simulation model of the black box that is our brain/neurons/neural pathways.
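
Something like this, perhaps (a toy Bloom filter alongside an ordinary adjacency dict; sizes and hashing choices are arbitrary):

    import hashlib

    class Bloom:
        """Fast, probabilistic 'have I seen this before?'."""
        def __init__(self, size=1024, hashes=3):
            self.bits, self.size, self.k = 0, size, hashes
        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:4], "big") % self.size
        def add(self, item):
            for p in self._positions(item):
                self.bits |= 1 << p
        def __contains__(self, item):
            return all(self.bits >> p & 1 for p in self._positions(item))

    seen = Bloom()
    graph = {"grandmother": ["face", "kitchen"]}   # the interconnectedness lives here
    seen.add("grandmother")
    print("grandmother" in seen)   # True; false positives possible, false negatives not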


Interesting answers here, but from their diversity it seems to me that we in fact still don't know much about cognitive representations and their physical anchoring in the brain. I once worked in a lab studying how the brain sees through the eyes; it wasn't close to explaining how we know what it is we see.


I think of it as a concept graph. Nodes represent concepts, and weighted edges represent navigable relations between concepts. Further, one can associate excitation levels with concepts.

At any given instant there is a set of excited concepts. Then there is some algorithm for time evolution of the excitations.
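
One possible rule, just to pin the idea down (weights and concepts invented):

    import numpy as np

    concepts = ["door", "house", "key", "lock"]
    W = np.array([[0, .8, .3, .6],     # weighted, navigable relations
                  [.8, 0, .1, .2],
                  [.3, .1, 0, .9],
                  [.6, .2, .9, 0]])

    x = np.array([1.0, 0, 0, 0])       # "door" is excited
    for _ in range(3):
        x = 0.5 * x + 0.5 * (W @ x)    # decay plus spread along weighted edges
        x /= x.max()                   # keep excitations bounded
    print(dict(zip(concepts, x.round(2))))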


Muscle memory, or engrams? How about: off-CPU EEPROM with a very slow burn-in rate. The computation is handled remotely, or distally, but a computation only becomes overriding after a certain number of like computations. Can be unlearnt: erased.


So many different answers. We really don't know much about how our brain really encodes the data and we are so far away from actually reproducing it. Scary when you think all of us have a copy of this memory structure.




A brief summary for us non-chemist CS kids would be helpful. Thanks.


http://www.amazon.com/Emerging-Physics-Consciousness-Frontie... << Good luck finding a 'brief' summary. It's a lot of theoretical shit that isn't absolute yet but you CS kids would do well to take a tip from your mind and move beyond booleans and algorithms into multivalued logical systems and logarithms: www.fuzzytech.com/binaries/ieccd1.pdf

(I bet you can find a copy of the aforementioned book on http://library.nu as it contains a pretty wicked intro to many different paradigms regarding these matters)


This is an interesting topic to which I have not found an adequate answer.

I wish we could someday know this and learn to control what we want to store for later use and what not to.


I believe it would be a graph, but there's also a "hashmap cache" with O(1) access for the most used nodes of the graph indexed by keywords or key-actions :)
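
Something like an LRU cache in front of the graph, i.e. (a toy sketch, not a claim about neuroanatomy):

    from collections import OrderedDict

    class HotNodeCache:
        """O(1) lookup for the most-used nodes of a slow graph."""
        def __init__(self, graph, capacity=3):
            self.graph, self.capacity = graph, capacity
            self.cache = OrderedDict()
        def get(self, key):
            if key in self.cache:
                self.cache.move_to_end(key)      # recently used stays hot
                return self.cache[key]
            value = self.graph[key]              # the slow graph traversal stands in here
            self.cache[key] = value
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the least-recently-used node
            return value

    cache = HotNodeCache({"home": "memories of home", "work": "memories of work"})
    print(cache.get("home"))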


The right response is that science doesn't know yet.


Well, yes. Science doesn't know, but that doesn't mean that scientists don't speculate, or that some of those speculations won't at some point be shown to be more or less correct. Not knowing in a scientific way is a very interesting kind of "not knowing" :-)


A list (with fewer than 10 elements at a time) to store easy things. I think only a few people's brains can function like a graph.


They're all wrong! It's just a massive array of Strings. No wonder I take so long to recall anything.


There are multiple. I suppose long-term memory is a hashtable and working memory is a stack ;)


I think long term memory is more organized than a hash table. I base this on observation of my own brain during a bad LSD trip, where I seemed to be aware of how I was reaching decisions.

For example, if I looked at a door and wanted to remember what that door was for, I could "see" a vast 2 or 3 dimensional array of hundreds of doors that I had seen before, arranged so that similar doors were near each other, and then I was aware of some kind of focusing in on the region of door-space where doors similar to the target door resided, and then I was aware of some kind of comparison of each of the doors in that region serially with the door I was looking at to find a match, and then the data for the best match was made available to my normal consciousness.

During another part of that trip I was aware of audio processing. I'd hear someone talking and first hear it as unintelligible sound, then I'd be aware of the sounds being broken down into separate sound units, and then recognized as English words, and then the relationships between the words being recognized, and then I'd become aware of the meaning of what I'd heard.

I also had a time during that trip where I was watching visual processing. I'd be aware of how my mind was noticing things in the scene I was looking at, recognizing objects and remembering what they were, and combining that to build an understanding of the scene.

Now I have no way of knowing how much of this was just hallucinations shaped by my knowledge of computers, and how much was the LSD actually letting me consciously observe mental processes that normally operate as black boxes to the conscious mind.

It's too bad research using LSD was greatly curtailed when the drug was banned. My suspicion is that what I perceived on that trip was a mix of reality and hallucinations driven by my computer knowledge. With enough experimentation, with people with different backgrounds observing and reporting, we could probably get some real insights into what is really going on in there.
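
Incidentally, the door-space lookup described above reads almost exactly like coarse-to-fine nearest-neighbor search. A toy version (feature vectors invented; real "door-space" would be far higher-dimensional):

    import numpy as np

    rng = np.random.default_rng(0)
    doors = rng.normal(size=(300, 2))             # remembered doors as points
    labels = [f"door {i}" for i in range(300)]

    def recall(target, region=0.5):
        # coarse: focus on the region of door-space near the target
        near = [i for i, d in enumerate(doors)
                if np.linalg.norm(d - target) < region]
        if not near:
            near = range(len(doors))
        # fine: serial comparison within the region for the best match
        best = min(near, key=lambda i: np.linalg.norm(doors[i] - target))
        return labels[best]

    print(recall(doors[42] + 0.05 * rng.normal(size=2)))   # most likely "door 42"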


I found LSD did not introduce things that weren't present already. Instead, it removed the filters that are necessary on a daily basis for me to get things done, and suddenly I was aware of and appreciating all the things that my subconscious normally sees and dismisses. The most vivid example of this was an awareness of the patterns and textures surrounding us all the time, as well as the relationship between spatial objects (e.g. a sign or a tree stood out as significant, rather than part of the "background").


http://www.bibliotecapleyades.net/ciencia/ciencia_morphic03a... << ctrl+f 'she went to siggraph', there's a lot to this sort of thing which is super interesting


I second this. Society shouldn't shy away from experimentation with LSD. Imagine what we can unravel.


If you look at it this way, then you would need to have a timeout element associated with each entry. In neuroscience, long-term memories are formed by neurogenesis in the hippocampal region - meaning new synapses form upon reinforcement learning. These connections decay over time, which means you slowly start "forgetting" things. If the timeout is a small value and there is no stimulus to increase it, we quickly forget, leading to short-term memory.
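
In code, that timeout-plus-reinforcement idea might look like this (wall-clock time standing in for synaptic decay; the constants and example data are invented):

    import time

    class DecayingMemory:
        """Entries fade unless reinforced."""
        def __init__(self, timeout=2.0):
            self.timeout, self.store = timeout, {}
        def learn(self, key, value):
            self.store[key] = (value, time.time())
        def reinforce(self, key):
            if key in self.store:
                value, _ = self.store[key]
                self.store[key] = (value, time.time())   # stimulus resets the clock
        def recall(self, key):
            entry = self.store.get(key)
            if entry and time.time() - entry[1] < self.timeout:
                return entry[0]
            self.store.pop(key, None)                    # forgotten
            return None

    m = DecayingMemory(timeout=0.1)
    m.learn("phone number", "555-0100")
    print(m.recall("phone number"))   # recalled while fresh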


Yes, agreed. But it would be better to think along a different line: say, take a situation and analyze how the brain manipulates it.





I also try to make sense of this, but it is too complex. Where do we store imagination?


Why are programmers so self-involved? The world is not a computer, and the human mind is not digital. It would be like a forum post on news.clockmakers.com asking, "Which wind-up clock spring does our brain use?"



Check out numenta.com


a neural network


Can you elaborate?


RC circuits or delay lines



