Really...the brain doesn’t create representations of visual stimuli or store memories? Under what possible definitions of those words can this statement be sensical?
Surely the author believes that visual stimuli cause measurable changes in brain state, and that people can indeed remember past visual stimuli. Then how is it true that brains don’t create representations of visual stimuli and store and retrieve them? I’m at a loss here.
Perhaps the author means that the brain doesn’t do these things in the same way as digital electronic computers we’re familiar with. That’s certainly the case at the most basic level.
So there isn't "a representation" in the discrete sense. It's more like the entire system changes, and it's impossible to physically scan specific elements of it to retrieve selected content.
You can trigger selective recall, but you're triggering a complex and noisy process which generates an experience that may include remembered elements - not pulling out a predictable bit pattern.
There isn't an exact equivalent in CS. Traditional binary memory is obviously nothing like human memory. Neural nets have some superficial similarities, but they lack generality.
I'm not completely convinced by the argument, but I'm glad someone is making it.
The problem with it is that we can remember specific discrete facts quite easily. If you ask me how many flats the key of F major has, I can tell you without being distracted by other memories.
What we don't know is how that fact is represented, how exactly my brain changed after I learned it, how similar those changes would be to changes in other brains learning the same fact, whether everyone has similar subjective experiences on recall, or how to scan someone's brain to check whether or not the fact is known.
And of course, why wouldn't the brain store a visual representation of what you remembered. That would be the easiest way to store and retrieve it, which is why we do that on computers as well.
Because it isn't how brains work. Recollection is re-experience.
The article you link says researchers taught a model how to match patterns to letters, given the presumption that they are letters, for a single subject's MRIs taken while they were experiencing the sight of words and letters. Not at all the same as saying a brain stores data.
We don't record the memory, we make it easier to feel and think the memory again. Like muscles adapting to exercise, our brains adapt to experience. Keep it up long enough and we get good at it.
As for the kinds of memories we easily do corrupt upon reading (was the perpetrator tall or blonde?), there may be no such data to draw from. Or worse, only generalised "data" which is good for pattern matching but not for actual recall. (The stuff prejudice is made of.)
If so, then there is no contradiction or problem, just a very intricate mesh of data.
(Also I have an idea which is not even a hunch, but pulled out of thin air, that our "logic" and "memory" are much more intermixed in our wetware than in computers.)
When I say "what is 9 times 9" your brain activates all the pathways (probably trained in childhood) that lead to you thinking "81" in a similar way to Pavlov's dog.
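As a caricature of that "well-worn pathway" idea, here is a toy associative memory in which repetition strengthens a cue→response link and recall simply fires the strongest association. The class and its names are invented for illustration; this is not a model of neuroscience.

```python
# Toy associative memory: repeated exposure strengthens a cue -> response
# link, and "recall" just fires the strongest association.
from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        # strength[cue][response] counts how often the pairing was reinforced
        self.strength = defaultdict(lambda: defaultdict(int))

    def reinforce(self, cue, response):
        self.strength[cue][response] += 1

    def recall(self, cue):
        candidates = self.strength.get(cue)
        if not candidates:
            return None
        # the most-reinforced response wins, like a well-worn pathway
        return max(candidates, key=candidates.get)

mem = AssociativeMemory()
for _ in range(100):            # drilled in childhood
    mem.reinforce("9 x 9", "81")
mem.reinforce("9 x 9", "72")    # an occasional mistake doesn't dominate
print(mem.recall("9 x 9"))      # -> 81
```

Nothing here is "retrieved from an address"; the answer falls out of which link is strongest, which is closer to the Pavlovian picture than to RAM.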
If recall is possible, then there is a representation there. It's obvious that there is a representation of Beethoven's 5th Symphony in the brain of somebody who can play it. It's just convoluted, distributed, and encoded in some completely crazy signal space. Yet if it weren't there, the person wouldn't be able to play it.
You may think these are dumb points for the author to make, but it’s not clear to me at all that the media or the VCs who buy the hype about machine learning, AI, and self-driving cars realize just how different they are.
No, it's incorrect to use "digital computer" here. More correct might be a von Neumann architecture computer. But then that shows that the author is attacking a strawman: people comparing brains to computers aren't limiting themselves to such an architecture.
We have plenty of Harvard architecture computers (eg. DSPs), and there are plenty of other computational architectures (DNA computing, optical computing, quantum computing).
The fact that the article's examples were of a specific type of computer architecture, and had nothing to do with digital computers per se despite the article claiming they did, just proves my point that the argument is flawed.
Otherwise what's the point in using an analogy?
"We don't create representations of visual stimuli"..."We don't retrieve information or images or words from memory registers." Neither do computers in many cases. It's as if the author is saying that because the brain isn't a tape recorder or film camera, it doesn't work like a computer. Nope. Studies show that, much like a computer, we encode information as we store it. Just because we encode it in a fancy (or weird :-) way doesn't mean it's not encoded or retrieved. The dollar bill example is a red herring.
Just spitballing here: what if, instead of creating an image of the dollar, the subject's mind created a visual 'thumbnail' and a 'hash' of the real dollar? The thumbnail she can recall easily, and it lets her draw 'enough' of the bill to be vaguely recognizable. The hash lets her recognize the real deal whenever she sees it: she simply compares the hash of the new object against her list of hashes, and if it's a dollar, the mind finds a match. Of course it's far more sophisticated than that, but these simple methods cleanly account for what the author observed.
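The spitballed idea above can be made concrete in a few lines: keep a crude downsampled copy for rough reconstruction and a compact hash for recognition. Everything here (the fake 8x8 "bill", the function names, the thresholds) is invented for illustration.

```python
# Sketch of the "thumbnail + hash" idea: store a lossy downsampled copy
# for reconstruction and a compact hash for recognition.
import hashlib

def thumbnail(image, factor=4):
    # keep every factor-th pixel of every factor-th row: lossy, but "enough"
    return [row[::factor] for row in image[::factor]]

def perceptual_key(image):
    # hash a coarse black/white version of the thumbnail so that
    # recognition is a cheap comparison, not a pixel-perfect replay
    coarse = [[1 if px > 127 else 0 for px in row] for row in thumbnail(image)]
    return hashlib.sha256(str(coarse).encode()).hexdigest()

# "memorize" a dollar bill (here: a fake 8x8 grayscale image)
bill = [[(x * y * 37) % 256 for x in range(8)] for y in range(8)]
memory = {perceptual_key(bill): thumbnail(bill)}

# later: recognition is a key lookup; reconstruction uses only the thumbnail
seen_again = perceptual_key(bill) in memory
print(seen_again)  # -> True
```

The point of the sketch: recognition succeeds even though nothing close to the full bill was ever stored, which matches the drawn-from-memory-vs-traced asymmetry in the dollar bill experiment.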
> Surely the author believes that visual stimuli cause measurable changes in brain state

They repeat throughout that it does.
> and that people can indeed remember past visual stimuli.
They agree also
>Then how is it true that brains don’t create representations of visual stimuli and store and retrieve them?
>Perhaps the author means that the brain doesn’t do these things in the same way as digital electronic computers we’re familiar with.
The author is definitely in agreement with the second statement, from what I understand. However, I think where people are getting lost is that they expect the author to resolve definitions for "cognitive"/information-processing terms (because in our day and age the two are treated as identical), when actually the author is rhetorically refusing them validity, on purpose. Hence, in the places where the author is expected to supply an equivalently robust counter-definition, they instead pose a far more general one, such as, with respect to learning, "the brain changes". The name for this rhetorical strategy is "refusing the blackmail of the Enlightenment".
I don't expect the brain to work like any computer we've ever built (which seems to be the point of view this writer is attacking), but I do expect that it has the capacity to store, retrieve, and process information and so the computer analogy seems useful.
The author concludes by asking "Given this reality, why do so many scientists talk about our mental life as if we were computers?" He offers no support for the proposition that this view is common, and I suspect he is often taking, as literal, speech that was intended to be metaphorical.
I suspect this misunderstanding of "information" is the core of the confusion. He needs to revisit physics and learn some computer science, because information and physics are inextricably intertwined, so the brain very much operates on information using rules.
Edit: and further, the brain is a finite state automaton due to the Bekenstein Bound, a physics theorem.
I don't doubt his expertise on the brain side, but he characterises the computer in a very limited way, almost perfectly suited to make his argument.
With the reconstruction of the dollar there are plenty of examples how computers need not store entire instances of a thing to be able to recognise it later, such as applications of hash functions or doing spam filtering.
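A Bloom filter is a clean example of that: it recognizes items it has "seen" while storing none of them, only hash-derived bits. This toy implementation (sizes and names are arbitrary choices for illustration) shows recognition without anything resembling a stored instance.

```python
# A Bloom filter: recognizes things it has "seen" without storing any of
# them. Only hash-derived bits are kept.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # derive several bit positions from independent-ish hashes
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # never a false negative; false positives possible but rare
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("a dollar bill")
print(bf.might_contain("a dollar bill"))  # -> True
print(bf.might_contain("a euro note"))    # almost certainly False
```

Note the asymmetry: the filter can say "I've seen this before" with confidence, but could never reconstruct the dollar bill from its bits, which is roughly the recognize-but-can't-draw gap the article makes so much of.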
This reads like a piece written by someone who heard a neuroscientist take issue with the "brain as computer" metaphor, but didn't quite grasp what it was all about.
The "brain is not a computer" meme has to do with the fact that the brain does not process information in the same way as a digital computer. It is not saying that the brain is not a symbol-processing/computational system.
The author is almost making it seem like models are reality and that people think that. They're not and I don't think anyone has ever thought they were...
Further, and as other comments have already mentioned, the brain is thought of and treated as a Turing machine, not a digital computer. It's done this way because the brain can be mapped to the definition of a Turing machine.
And I have to defend von Neumann. In his book, he explored Turing equivalencies between the brain and the computer concepts of the time used to implement the digital Turing machine; he didn't actually think the brain was a one-to-one mapping to a digital computer. He knew the difference between models and reality.
Even the historical models the author mentions (hydraulics, automata, etc.) all contain some Turing equivalencies if implemented correctly; they were simply using the language and examples of their time to express this.
The author also continues to mangle any and all ideas of modeling, abstraction, and equivalence throughout the whole article. With regard to his 'uniqueness problem': 'information loss' is modeled digitally for a reason. Just because humans are lossy doesn't mean we can't model them that way. Think of a compressed image file.
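Lossy storage is trivial to model digitally. A minimal sketch (invented numbers, crude quantization standing in for real codecs): compress by throwing away detail, then reconstruct something wrong in detail but right in shape, much like the badly-drawn-but-recognizable dollar bill.

```python
# Lossy storage in a few lines: quantize, store, reconstruct.
def compress(samples, step=10):
    # throw away fine detail: keep only which "bucket" each value fell in
    return [round(s / step) for s in samples]

def decompress(codes, step=10):
    # reconstruction is approximate by design
    return [c * step for c in codes]

original = [3, 17, 24, 98, 41]
stored = compress(original)
recalled = decompress(stored)
print(recalled)  # -> [0, 20, 20, 100, 40]: not the original, but close
```

So "humans don't store exact copies" is not evidence against computational modeling; lossy, approximate storage is itself a standard computational idea.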
I don't think there's a single researcher worth their salt that thinks the 'IP Metaphor' is gospel. That is just a grossly unscientific idea to assume.
We're all free to choose any model or collection of models we wish to approximate reality, but some of them work better than others and the brain is a complicated thing to model.
The author is trying to dramatize a triviality.
The author is arguing that when there is a "monopoly" of models with respect to a given domain, like the brain, that monopoly inexorably tends to make the conceptual distinction between model and reality irrelevant. The author then goes on to cite examples of this, not only with respect to our current age's infatuation with the IP model, but previous ages' own repetition of this with respect to their guiding technological frameworks (hydraulic engineering and the humors, etc.).
Academic science isn't an apolitical, free space. We are not all free to choose any model, and what it means for a model to "work better" comes down to evaluative criteria that are almost always baked into a particular set of theoretical assumptions.
That's where you're wrong. Way too many people, many of them engineers, consider models to represent reality, and that's a real, big problem, because it is deeply linked to how they think about science.
Most of the comments I’ve read don’t like the article but almost all of the commenters I’ve read don’t seem to have studied this issue. It gives me the impression that these are visceral reactions. The article is not an article for experts. It’s expository in nature.
One thing that stood out for me was this quote:
> The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.
If true this seems to me (very much a non-expert) to give serious doubt to the notion that the brain is a computer.
A small aside, even simulating a small collection of quantum particles fully is enormously taxing with current computers and adding more particles increases complexity beyond just a linear increase. But this is a mathematical exercise.
Now, it's possible that the human brain depends on some law of physics that is not computable (possible to simulate on a computer), but given the level of study that had gone into neurons, along with the temperature of the brain vs the energy ranges we've examined with colliders, it seems super unlikely.
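To make "beyond just a linear increase" concrete: a full quantum state of n two-level particles (qubits) needs 2^n complex amplitudes, so memory doubles with every particle added. A quick sketch (16 bytes per amplitude assumes complex128):

```python
# Why full quantum simulation blows up: an n-qubit state vector needs
# 2**n complex amplitudes, so memory doubles with each particle added.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    # 16 bytes = one complex128 amplitude (8-byte real + 8-byte imaginary)
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
# 10 qubits fit in ~16 KB; 30 need ~17 GB; 50 need ~18 PB
```

This is exactly why exact simulation stalls in the tens of particles, and why "hard to simulate" is very different from "not computable".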
If it helps, Turing machines with n-dimensional tapes have been proven equivalent to the basic Turing machine.
This is not the equivocation that it may appear to be, as it establishes a sort of asymptotic boundary between what is possible and what is not (the more memory we have, the closer we can get to it.) It also means, for example, that we don't have to wonder if there is one computer instruction set or architecture that can perform computations that are impossible by another (again, up to having sufficient memory to complete it.)
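The standard trick behind that n-dimensional-tape equivalence is a pairing function: every 2-D tape cell (x, y) maps to a unique 1-D cell, so a plain tape can host a 2-D one (and, applied repeatedly, an n-D one). A minimal sketch using the classic Cantor pairing function:

```python
# Cantor pairing function: a bijection from pairs of naturals to naturals,
# letting a 1-D tape address every cell of a 2-D tape.
def cantor_pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

# distinct 2-D coordinates always land on distinct 1-D cells
cells = {cantor_pair(x, y) for x in range(50) for y in range(50)}
print(len(cells))  # -> 2500, one unique 1-D address per 2-D cell
```

The simulation pays a time cost for the address translation, but computability is unchanged, which is the sense in which the architectures are all "the same up to memory".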
The author of the claim you are questioning has not, so far, returned to explain what he means, but I think he is saying that the brain is Turing-equivalent in the informal sense given above: we can compute like a Turing machine, up to the available tape/memory (though with a very limited tape, if we are not writing things down...)
If that is so, then I (one of the people here criticizing the article) must say that I don't think it is relevant. An alternative interpretation of the statement, that it has been shown that there is a Turing machine equivalent to the human brain, would seem to depend on believing (as I happen to) that the brain's functioning is a matter of electro-biochemistry that could, in principle, be simulated by a computer. But no one, so far, has given a demonstration, or even a convincing explanation, of how that works at a Turing-machine level of abstraction.
With regard to the quote you offer: I think it is a simple case of rhetorical overreach -- one might need to know the entire history of that brain to fully understand everything there is to know about its current state, but that does not mean that, absent that full history, the state is meaningless. In understanding what a person is thinking, what they remember (which is an aspect of their brain's state) is more important than what actually happened.
I gathered that the quote I referenced means that the state of a brain at time t is not sufficient to reconstruct memories or other meaningful information. The fundamental point of contention between you and others criticizing the article appears to be that you all believe that there is a storage mechanism in the brain in a similar (analogous?) fashion as a computer. I gather the author claims this is not so. Information is not stored in neurons in such a way that one “retrieves” it by accessing a storage location.
I don’t know enough about this stuff to intelligently comment on the veracity of it. I just know that someone far more knowledgeable than me and just about everyone else commenting says that our intuition about how this stuff works is wrong. That alone is worth causing me to reconsider my intuition on this stuff.
The author is saying our brains do not function like our digital computers, something I think we do all agree on. It is not so clear how the author thinks our brains do work, but he apparently wants us to stop using computer metaphors when discussing their function.
He would have this prohibition extend to the notion that our brains store and retrieve information, which is absurd; one might as well argue that a computer is not a Turing-equivalent device because RAM is not a tape. The author says that scientists will never find copies of words or grammatical rules in the brain, and if by copies he means coded in something like UTF-8, then that is of course true, but beside the point: if his brain did not have some mechanism that supports the storage and retrieval of this information in some manner, how was he able to write the article in the first place? He claims you won't find a copy of Beethoven's 5th Symphony in a brain, but I suspect that at least Beethoven himself, and many conductors of the piece, have had just that - and the soloists who play his piano concertos are not reading from a score, so where does that come from?
I think the author may have ended up making these absurd claims because he is trying to use the trivial brain-does-not-function-like-a-computer argument to prove something that is just an unargued-for intuition: he doesn't seem to think RAM (or perhaps any form of physical information store) could possibly be the foundation for something that works like human memory. He is apparently unaware of the extent that software such as neural networks (or even relational databases) have already extended the concept of information storage and retrieval beyond the simple model of randomly-addressable bytes (which does not, of course, make the point that human memory is like a computer's; what it does show is that the author's low-level comparisons are insufficient to make the larger point he is trying to squeeze out of them.)
I don’t know enough to understand how that is possible or why someone knowledgeable about this stuff thinks this. I have basically the same conception of the brain and how it works as you do. But I’m confronted with the fact that a person far more knowledgeable than me thinks otherwise. It is that fact that causes me to persist in my view with caution. The author may be a crank. I don’t know.
Yeah. It also puts Quantum Mechanics into serious doubt...
So, I'll wait for better evidence than some random person on the internet thinking it feels right.
I see from another comment that you disagree with the author. I’m guessing you aren’t an expert in this area either. Do you have more than visceral reasons to doubt the author’s conclusion? The author’s credentials seem to imply that he has thought about these issues far more than you or I. He presumably knows much more about how the brain works than you do. Why do you think your few minutes of thought on this article are enough to discount what he says and his conclusions? Isn’t that a bit arrogant?
That kind of claim requires a clear and reproducible effect, and confirmation from many people that they looked and found it. It's the kind of claim where, even if I did the experiment and got the result myself, I still wouldn't trust it.
The quote I made certainly goes against my intuition. But I’ve not studied the issue and don’t know enough about it to trust my intuition. Apparently though there are people who have studied this at the level of an expert and they believe it. At the very least this indicates that possibly my intuition is wrong and that I shouldn’t be so ready to discount the article.
It sounds like because the author doesn't understand how neurons create a representation of reality, they're splitting hairs and saying it doesn't.
But when it comes to the individual notes, their sequence had damn well better be literally correct for the entire performance. If not, someone in the audience will certainly notice that one flubbed note.
So in learning the work, "she was changed in some way" all right. As had some members of the audience ... identically. And that 'some way' certainly resembles pulling bytes out of 'storage'.
I actually like this as an idea: our tools for understanding brain function are still too primitive, and our traditional computer-based models are lacking.
And that's the issue, isn't it? I think he's using "retrieve" to mean something much closer to what a digital computer does. Ie, there's a single "place" in the brain where the song is compressed and stored. My definition of "retrieve" (and yours, I think) is implicitly more relaxed; I say that a rough distributed system that reacts to stimuli like songs by being able to approximately reproduce them later counts as "retrieval."
As other commenters have noted, the article uses a very restrictive definition of computation/retrieval. I mean, earlier in the article, he gives a definition of the same game that the "Lifelong Learning" and "Reinforcement Learning" people use.
I think he's actually on the same page as many learning theorists, and is just trying to make it clear to a general audience that a very "tight" match between the brain and Von Neumann machines isn't reasonable.
Computational models are not an exception to this.
There is not even a single "part" or "function" of the brain that we fully, exhaustively understand through a computational explanation. All claims of certainty are premature.
What's really fascinating and really needs the attention of historians and anthropologists is why in this current historical moment so many STEM educated people who are otherwise very bright end up confused about this. Maybe the answer is obvious though.
The model of what something does is implemented by an underlying mechanism. But for many reasons the mechanism doesn't have to be, and often isn't, a naive translation of the model.
What does he mean by "neural structure" here and how is it different from "memory" and "representations" which supposedly we don't have?
Quite ironically, I think his line of thought shows precisely why the brain is probably quite like a computer. The algorithm going on in his brain was probably something like this:
1. Assuming I'm like a computer leads to negative emotions (because lack of free will and reduction in self-esteem it implies).
2. Therefore give high weights to facts contradicting this, and low weights to facts supporting this.
3. For a range of subjects regarding the behavior of the brain, do:
3.1. If the subject feels like it's logically supporting my view on the subject, add it to the article.
3.1.1. Anything I know the brain does and I have no clue on how can a computer do, will automatically feel like it supports my conclusion. Since I'm pretty clueless as to how computers work in general, most things are actually going to seem like something a computer can't do.
3.2. Otherwise, ignore this and keep going on to the next example.
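Tongue firmly in cheek, the satirical loop above can even be written out as code. Everything here, the predicate, the subject list, the function name, is invented to match the joke, not anything the article's author actually did:

```python
# The satirical "motivated reasoning" loop, as code (steps 3-3.2 above).
def write_article(subjects, feels_supportive):
    article = []
    for subject in subjects:
        # step 3.1: keep any evidence that feels like it supports my view...
        if feels_supportive(subject):
            article.append(subject)
        # step 3.2: ...and silently skip everything else
    return article

subjects = ["dollar-bill recall", "neural nets", "memory registers"]
# step 3.1.1: anything mysterious-seeming "supports" the conclusion
biased = write_article(subjects, lambda s: s != "neural nets")
print(biased)  # -> ['dollar-bill recall', 'memory registers']
```

Which is, of course, itself a pattern a computer can run, completing the irony.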
But the brain and the computer are both pattern-recognizing feedback loops; one just isn't as developed yet.
The computer doesn't see the image but neither do I. We simulate it.
> The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’)
The author should have read https://en.wikipedia.org/wiki/Analog_computer