- The brain is fuzzier, but it still stores the link between the smell of roses and the look of roses, just as a probability, like a Bayes network. And you will fall for illusions, just like a Bayes network can be wrong ("for a second I thought this was..." and then more information from the senses falsifies/corrects the probabilities).
- The brain is imperfect at reading from memory, but it still does. It just uses really good, lossy compression. It loses a lot of detail, but often fills the holes probabilistically. Besides neural nets, in computers this would be: defaults, recovery blocks, etc. In part the good compression comes from an additional layer of abstraction, but computers can do this too. A very simple example is the blurry color layers in JPEG.
- We are better at recognizing than recalling, because a highly compressed memory is not enough to recreate the original, but has enough indicators to verify it. This is very much like a checksum.
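The checksum analogy can be made literal. A minimal sketch (Python, stdlib only; the digest standing in for the "highly compressed memory" is my own illustration, not a claim about how neurons do it):

```python
import hashlib

def remember(experience: str) -> bytes:
    """Store only a highly compressed trace (a digest), not the original."""
    return hashlib.sha256(experience.encode()).digest()

def recognize(trace: bytes, candidate: str) -> bool:
    """Verification is easy: recompute the digest and compare."""
    return hashlib.sha256(candidate.encode()).digest() == trace

trace = remember("the smell of roses")

# Recognition works: 32 stored bytes are enough to verify a match...
print(recognize(trace, "the smell of roses"))  # True
print(recognize(trace, "the smell of tar"))    # False

# ...but recall does not: there is no way to run the digest backwards
# and recreate "the smell of roses" from the trace alone.
```

The asymmetry is the point: the trace can confirm the original when it is re-presented, but cannot regenerate it.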
What there is, is the bias of the "hardware". Our brains are not good at deterministic iteration; computers struggle with the complexity of forming wisdom and fuzziness, and feedback does not come naturally to them either. But in principle, we are both Turing complete or something :)
Given the author's background, I'd put money on the idea that this is the result of years of frustration at having his life's work dismissively reduced by casual observers to something like "yeah, it's basically a computer" and thinking they therefore understand all the intricacies of the human brain. I can see how that might grate on a person in his field and trigger a response like this.
We know that the brain uses signals, and that these signals are composed of codes. We know that to interpret the world, certain information processing must take place. We are learning more about the exact nature of this processing, more about how neurons and even other fundamentals of the body code information and contribute to processing (e.g. chemical processes), but they definitely process information. We even know some of the codes.
On the other hand, if the author wants to argue about the nature of what information is and what processing is... he's going to have a steep hill to climb.
From this we can often find things like:
- frequency of impulses increases as stimulus increases
- different numbers of neurons in a similar area fire simultaneously
Things like that. The Wikipedia articles linked by wickedagain contain a good summary. I won't link specific sources, since there are thousands, but here's one for instance:
There, in Fig. 6 for example, you can see that the frequency of nerve firing increases in the tibial nerve as heat is applied to the paw. (Yes, a lot of information comes from horrible animal studies. It's part of the reason I didn't continue in this domain of work -- although rest assured, this technique is possible with humans, without harm.)
So, do impulses in the perceptual nervous system reflect how brains work, how we think? It's hard to say for sure, but it's clear that information is coded according to certain possibilities related to how neurons work, and this information must be combined and processed somehow to create what we call perceptions. Is it such a stretch to imagine that these perceptions become their own codes, abstract reductions of correlations of perceptual information, that are evaluated, compared, and combined in the much, much more nerve-dense central cortex to produce what we call "thought"?
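The rate-coding idea ("frequency of impulses increases as stimulus increases") is easy to caricature in code. A toy sketch -- the numbers are illustrative, nothing physiological about them:

```python
def encode(intensity: int, window: int = 20) -> list[int]:
    """Rate code: fire roughly `intensity` spikes, spread evenly over
    a window of `window` time steps (requires intensity <= window)."""
    return [1 if (i * intensity) // window > ((i - 1) * intensity) // window else 0
            for i in range(1, window + 1)]

def decode(spike_train: list[int]) -> int:
    """A downstream reader recovers intensity by counting spikes."""
    return sum(spike_train)

train = encode(7)
print(train)          # 20 time steps with 7 spikes spread across them
print(decode(train))  # 7 -- intensity recovered from firing rate alone
```

A stronger stimulus simply means more spikes per window; nothing about the individual spike carries the value.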
A further frustration might be with computers themselves. I don't see why an introduction/exposition to computing should be something that needs pardon.
- The brain's network is not very well understood. For example, half your body's neurons are in your cerebellum. There are people who don't have a cerebellum, and they only seem to walk later and have balance issues. That is half of all your neurons, and until you do a scan, people can't tell you are missing them. Alternatively, we have Henry Molaison (known in the literature as HM), who could not remember a thing from after the surgery to save his life. All the surgeons took was his hippocampus (and some other stuff) back in 1953, which we then found out was SUPER important for memory. He lived to be 82 and died in 2008, incidentally. Suffice to say, the neurons, their local network, and the long-range network (aka your whole body) are all very important in terms of how it all works. You can take out a ton and not really lose anything, or you can take out some special ones and never remember a thing again. The brain is not 'fuzzy' and really 'fuzzy' all at the same time. It's like lumping all of China's politics into just their Chairman's ideas; you lose any sense of what is going on.
- The brain's recall system is also not well understood. As with HM's life, you take out this special part, and it all goes to shit. However, when HM sang or played music, he could do an entire song or piece, even if it was In-A-Gadda-Da-Vida's length. Also, he remembered shocks to his hand after a surgeon put a joke buzzer in one day; he was hesitant to shake hands the next day. Our memory is not in any simple way "good, lossy compression". Look at PTSD victims; they remember it ALL. Typically, it comes down to the motivations of the person. If you have to remember for the future, like an important fact for a test, or the sound a wolf makes right before it leaps onto you, then you will remember it very well. If you don't need to remember, then you typically do not. Again though, the brain is not well understood, and we have all experienced exceptions to this.
- Some people are better at recognizing, some are better at recalling, and we don't know why. People who are face blind are really bad at remembering faces; Brad Pitt says that he is face blind. Some people are really good with faces and really bad with names; some are the opposite. Some are synesthetes and will recall equations or speeches by the taste they experience when doing it. Some people are savants and can recall an entire phone book and then draw it, nearly as a carbon copy, from memory. Memory is a fascinating subject, and whenever you think you know what is going on, something new pops up and kills the theory.
- Our brains can be very good at deterministic calculations. Any major league ballplayer is very good in the outfield (the D-backs excepted this year) and can very accurately catch a ball hit from hundreds of feet away. They may not be good with math, but at this, some of them are much better than any computer.
The brain is not well understood at this time. Calling it a computer is obvious nonsense, but it's a good working theory to approach from, especially considering we have no other real ideas.
The same thing could be said about 'intelligence'. Current definitions of intelligence are not abstract definitions (even though the word itself seems like some abstract term, similar to calculation, motion, etc.), but rather labels/descriptions of particular human traits. If a more abstract definition were offered, you would see a lot more things in nature 'becoming' intelligent. So, currently we are in a situation where 'intelligence' is defined in terms of human traits (like being able to speak human language, do human mathematics, drive cars, etc.), but at the same time we're looking for intelligent aliens. You can't find 'intelligent' aliens if the term intelligence is defined in such a way as to only ever apply to humans. It's really a parlor trick.
This is an expression of the human desire to feel special - you don't want to be labeled as a computer or a cluster of regular matter - rather, you are an 'intelligent' soul (a different substance, not regular matter) made in the image of the most powerful creature in the universe, who happens to love you and care about you.
Agreed with the rest of what you wrote, but "intelligence" is more or less uniformly defined as the ability to quickly assimilate and apply information. What we try to measure is the speed, accuracy and complexity of that process. We have tried to measure the same process with dogs, parrots, monkeys, and ravens, and we even use comparable scales (e.g. general opinion is that adult dogs can handle the same mental complexity as toddlers).
You are correct that the tests we use incorporate human traits. But that's as much because we are mainly testing humans as because we use humans as our reference.
What these experiments with animals are designed for is to check which animals can do things in a similar way to humans.
Why are ravens intelligent? Is it because they can quickly process information? No, that is not what experiments are checking. They are labelled as intelligent because they can use tools to unlock their food.
Why are dolphins intelligent? Is it because they process complex information quickly? No, it's because they have large brains and exhibit empathy towards humans plus they seem to be able to communicate with each other.
People don't say "Well, that system can process complex information quickly, therefore it's intelligent". Instead, what you'll hear is "WOW, animal X can use tools, can communicate, can organize in groups! They are more intelligent than we thought". Using tools or communicating is just one way of processing information. There are many other ways. For example, an animal might not be able to use tools in a cage to unlock its food, but it could be a terrific fighter in the jungle. Fighting in a jungle is a process that demands quick information processing, but those animals are somehow ranked below ravens or dogs in intelligence. That is because people aren't using the definition you've provided.
> You are correct that the tests we use incorporate human traits. But that's as much because we are mainly testing humans as because we use humans as our reference.
Of course, a mere year ago the fact that computers couldn't find good moves in Go was an argument. And all the arguments that came before that: voice recognition, reading, chess, games, even counting itself at one point. Of all those things that computers/"information processing" couldn't achieve, we now know: computers have recognized more spoken text, read more books, letters, envelopes and ... than humans ever will. Computers have played more chess games, and certainly have processed more numbers than humans ever did or ever will. Humans probably still have a mild advantage in the amount of Go processing, but it's clear that's not going to last much longer either.
This article takes another approach: because the type of processing that happens in our minds differs "so much" from what we do with computers, they must be fundamentally different. The emphasis lies on the human mind being different: humans don't work like computers, not the opposite. That wouldn't work, because computers do work like humans. That's why we build them, and use them. Computers analyse stocks, sell apples and cars and home insurance, they mail letters, they work out what a business should schedule tomorrow, they move their arms to glue soles onto shoes, ...
One might also analyse why computers work so very differently. Why von Neumann? Well, because we don't really know ahead of time what we want computers to do, and even when we do, we want the ability to change it later. We want computers, to a large extent, and maybe even to 100%, to work like humans do, but we can't do that. As illustrated above, we get closer every year. Computers simulate other machines; that's what they do. So what you should compare is not instruction sets versus neurons, but the neural network models those instruction sets simulate versus biological neurons.
And then the differences melt away. Not fully. Not yet. We don't know how. Not fully. Not yet.
But more every year.
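The "instruction sets simulate models" point can be made concrete: here is a von Neumann machine simulating a (very crude) leaky integrate-and-fire neuron model. The parameter values are made up for illustration:

```python
def simulate(inputs: list[float], threshold: float = 1.0, leak: float = 0.9) -> list[int]:
    """Crude leaky integrate-and-fire neuron: membrane potential decays
    by `leak` each step, accumulates input current, and emits a spike
    (then resets) whenever it crosses `threshold`."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current  # integrate with leak
        if v >= threshold:      # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Sustained moderate input makes it fire; a gap lets the potential leak away.
print(simulate([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 1]
```

The interesting comparison is between this model and a biological neuron, not between the Python bytecode underneath and the neuron.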
I mean shit, if I judge the distance between two points, how can it not be said that I have collected sensory data and computed the result?
All of his examples were pedantic or irrelevant.
Let's call it "ipython before computers", or "excel before computers" if you must.
In order to make any point remotely like this, he'd have to go talk to some actual neuroscientists, which he very plainly didn't.
To use computers as a simile is not so strange.
However, what we still lack completely is any kind of model for autonomy, i.e. how the brain decides what to "compute".
"We don't build machines in order to raise them and love them; we build them to get work done.
If the thing is even remotely close to "intelligent", you can no longer issue commands; you must explain yourself and ask for something and then it will misunderstand you. Normal for a person, pretty shitty for a machine. Humans have the sacred right to make mistakes. Machines should be working as designed."
A "child" that requires far fewer resources will be an incredibly popular product. Not to mention an economic necessity.
And I would even say : 100 billion "small" AIs (scale vaguely comparable to a human mind) is a far preferable situation to one big AI. Both from a survivability standpoint and from an "oh my God it'll kill us all" standpoint.
> "We don't build machines in order to raise them and love them; we build them to get work done.
I would even say, if it's at all possible we'll do just that in the next 10-15 years. If we don't get advanced enough AI, perhaps 20-25 years.
Not a doubt in my mind.
(Except for Japanese of course. Excluding them from this question)
Seriously, fertility will become a non-issue if we prolong the fertile period of our lives. Imagine having one kid at thirty, and another one at sixty.
The problem does not tend to be that it's physically impossible to have a child.
Next up, an ornithologist tries to argue that birds don't fly, because they have muscles instead of engines and nobody can pinpoint the ailerons or control stick.
What else could a brain possibly be doing? What do you think a brain responding to stimulus and changing as a result of it is, if not a form of information processing and storage? And why would that be fundamentally non-computational just because it didn't look much like how we might hand-write software to do it?
But there was a biologist who basically said what you're saying: what then is 'walking'? Don't you define it by what it 'is doing'? And this objection misses the mark, because 'walking' is an action, not an entity. The brain, on the other hand, presumed to be responsible for consciousness, is not (not equal to) 'talking', 'walking', 'imagination', 'recognition'. These are faculties, but not the underlying physical/ontological support that makes these faculties possible.
The whole project of explaining consciousness must reveal the underlying substance, be it matter or something else, that makes it possible, and the mechanism by which all the faculties associated with consciousness arise.
And one more thing: the main conundrum of explaining consciousness is 'qualia': that which is experienced by being conscious, what it is like to be conscious.
This would imply some of us are less conscious than others to some degree. However, I now see the crux of the issue you've discussed by asking, "How do I measure consciousness?" Is it binary or is it a gradient (a multi-dimensional gradient)?
I'd even argue that people who never ever ponder their existence aren't conscious at all. They simply execute their DNA, nothing more, nothing less. On the other side are the few whose DNA and upbringing have enabled them to act on themselves. I see this as the main difference between conscious and unconscious people: the former take in their surroundings and the voices in their head to draw certain conclusions based on which they set certain actions on said voices, whereas the latter is happy with the reward and punishment system nature has laid out in our brains.
Then of course, there's everything in between, mixed-and-matched wildly, leading to the gradient that is consciousness (quite literally a gradient, slipping off is very easy, climbing back onboard can be damn near impossible).
"Information" has a precise meaning. It is easy to design an experiment that proves information is being stored within yourself somewhere.
What form it takes hardly matters at all, if we can put bits in and get them back out later, information is being stored, and that is memory.
It's obvious that your immune system also stores information. Nobody would really question that -- it's a necessary precondition for vaccines to work. These are information-theoretic memories too. They are all that's required to show that the body computes.
There's something in the physical structure of my brain that encodes various data in a probabilistic, fuzzy, emotionally-connected way, and in a way that will be virtually impossible to identify and understand. The information's still stored.
The brain has storage for data. It just works differently, is lossy, and connects with other data for compression or reinforcement. This is still storage.
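A sketch of what "lossy storage that fills holes" might look like, with interpolation standing in for the probabilistic gap-filling (my own toy example, not a model of the brain):

```python
def store(signal: list, keep_every: int = 4) -> list:
    """Lossy storage: throw away all but every `keep_every`-th sample."""
    return signal[::keep_every]

def recall(trace: list, keep_every: int = 4) -> list:
    """Reconstruct by interpolating between the kept samples --
    the detail in the gaps is plausibly invented, not retrieved."""
    out = []
    for i in range(len(trace) - 1):
        a, b = trace[i], trace[i + 1]
        for k in range(keep_every):
            out.append(a + (b - a) * k / keep_every)
    out.append(trace[-1])
    return out

signal = [0, 1, 4, 9, 16, 25, 36, 49, 64]  # the original experience
trace = store(signal)                       # only [0, 16, 64] is kept
print(recall(trace))  # right length, right endpoints, invented middles
```

The recalled version has the right shape and the right anchor points, but everything between them is a confident fabrication -- which is roughly what eyewitness-memory research keeps finding.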
>But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
I have several memories.
>cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain
We can build artificial neural networks at the moment to recognise faces and no doubt if you fed them the right inputs, to recognise Beethoven and probably tell you which symphony was playing. You would also not 'find a copy' of the 5th in the neural network. It is likely that the human network of neurons works in a similar manner to the artificial neural network with the learnings stored in modifications to synapses and changes in memory values respectively. I mention this rather obvious stuff because the author goes on to suggest:
>Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept
On the basis of the above kind of semi-arguments. The fact that the Human Brain Project may be a poor use of research funds is kind of a different issue.
>I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor.
If you think of anything we normally regard as intelligence, such as playing chess, it involves taking in some information, such as where the pieces are, and then doing something with it. How that is supposed to prove his point is lost on me.
etc., and on for dozens of other wrong implications and twisted logic. Life's too short...
(For a kind of counter-argument, check out the Recurrent Neural Networks and Inceptionism articles https://news.ycombinator.com/item?id=9584325 and https://news.ycombinator.com/item?id=9736598 for how eerily close the behaviour of artificial neural networks can be to human ones.)
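The "you would not find a copy" point is easy to demonstrate with a toy perceptron -- a far simpler learner than the face-recognition nets mentioned above, but the same principle. Everything here is an invented illustration:

```python
from itertools import product

target = [1, 0, 1, 1]   # the 4-bit pattern to "recognise"
w, b = [0.0] * 4, 0.0   # the learned state: four weights and a bias

def fires(x):
    """The unit fires if the weighted sum of inputs crosses zero."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# All 16 possible 4-bit patterns, labelled 1 only for the target itself.
data = [(list(x), 1 if list(x) == target else 0) for x in product([0, 1], repeat=4)]

for _ in range(300):    # classic perceptron learning rule
    for x, y in data:
        err = y - fires(x)
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

print(fires(target))        # 1: the pattern is recognised...
print(fires([1, 1, 1, 1]))  # 0: near-misses are rejected...
print(w, b)                 # ...yet nowhere in w or b is a copy of [1, 0, 1, 1]
```

After training, the "memory" of the target pattern exists only as a tilt in four real-valued weights. Inspecting them would never reveal a stored copy, yet the pattern is reliably recognised.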
What strikes me in the section of the article with the six metaphors is that each one is "true" at a certain level of abstraction, and as time progresses, the abstraction gets thinner and thinner and closer to reality.
The author's claims that I have the biggest problem with are that brains don't store or process information. We clearly do. Our behavior depends at least partly on past experience and our perceptions of current circumstances. When I remember something, do I assert the read and address lines on a NAND chip and latch the data into a register? No, certainly not. Do I trace a weighted graph of neural connections forming a fuzzy cloud of meaning? At the least, that's closer to what I'm doing, and I'd still classify it as information storage and retrieval.
The author doesn't seem like they're attacking the imprecision of a metaphor. They seem like they're rejecting the "brain as a computer" metaphor outright.
> "But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers"
We are born with most (if not all) of these things, and we develop more of them as we grow and learn. Sure, the nomenclature here was chosen for ridicule, but functionally those elements are present.
The brain is not a magical pudding that works in a completely occult and mysterious manner. While we do have a ways to go in deciphering the minutiae of its architecture and operation, the article engages in assertions that run completely counter to neurobiological facts we have already learned:
> "We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not."
This is at best a half-truthy strawman argument. Both our computer technology and the brain's architecture employ different mechanisms by virtue of their underlying implementation. However, the functions performed do converge if you look at what's actually being computed. There are entire fields of both CS and neuro research that are completely based on this overlap.
> "Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
Humans, on the other hand, do not – never did, never will. "
This is utter bullshit. Humans totally operate on symbolic representations, decades of neuroscience has shown us that. Humans absolutely do store, process, and retrieve information. Humans possess without a doubt physical memories, just because they're not encoded byte-wise doesn't give you an excuse to make these garbage claims.
This is exactly a mapping from one computational calculus (e.g. "brain calculus") to another (e.g. binary logic) with a compiler. While it still might be that the physical processes governing brain functioning somehow elude computational description with digital logic (i.e. such a compiler can not exist. Not sure how this could be the case given the notions of universality by way of Turing machines attached to that), surely this would mean we'd have a new class of computational power on our hands.
And to say that brains are capable of more than Turing-complete (ignoring memory constraints) computation seems hard to believe (see also https://en.wikipedia.org/wiki/Digital_physics). Now how to factor true randomness (if it exists in our universe) into this I don't know, and maybe it doesn't even play a role...
Humans, on the other hand, do not – never did, never will."
In addition to neuroscience, going deeper to the atomic level and beyond, humans really do operate on symbolic representations of the world: representations that are stored as states of physical elements. And the algorithm that guides everything we do, would be the laws of physics.
In particular, we can bat it away by citing the fact there are humans who can recite pi to a large number of decimal places (proving, specifically, that they can store and retrieve digital data). And there are humans who can do long multiplication in their head, by following a series of procedures. Also, humans can store, retrieve, and communicate - with near perfect fidelity - image data (http://www.dailymail.co.uk/news/article-1223790/Autistic-art...)
Whatever the specific molecular structure of the brain's representation of the experience and memory of Beethoven's 5th, it is almost certainly not stored in a single neuron; yet this hardly prevents talented musicians from playing the 5th from memory.
Are all brains Turing complete?
I'm not convinced by your points. Savants are exceptionally uncommon, and there's no evidence most people can learn the skills they have. Many savants are famously bad at everyday human skills, so clearly there's a trade-off - at best.
Storing and operating on numbers with arbitrary precision is a completely trivial operation for all but the oldest computers. But the abilities most humans find trivial - exploring an arbitrary environment by body movement (without falling over), throwing and catching things, playing sports, using tools creatively, reproducing and maintaining relationships, communicating using complex natural languages - are huge engineering problems in the digital space, and most aren't anywhere close to being solved definitively.
So let's ask again - how many of these problems can be solved using Turing complete digital systems?
I don't think anyone can honestly say "All of them." Given the state of the art, a more realistic answer is "We just don't know yet."
In the book "The Inner Game of Tennis" (fantastic read, more about the brain than tennis), the author explains how, during service, the other player has to respond before the first player has hit the ball, because of simple physics -- if the 2nd player waits for the ball to be hit then he absolutely doesn't have the time to do anything before the ball is on him. (That's probably also true of baseball, although I don't know anything about that).
And so, how does he do it? Nobody really knows, but the current thinking is that, through practice, the player that receives the ball interprets the movements of the hitter and induces where the ball will likely be (with great precision), without actually using much information from the ball itself.
It's possible that, just as elite chess players can play without a board, elite tennis players could play without balls.
In your example, as in my martial arts training, certain body patterns repeat before an action happens. Intuition models the likely consequence (e.g. trajectory). Another pattern says what response to take for that trajectory to get the desirable result. A whole set of patterns nicknamed "muscle memory" executes that thought as muscle impulses and controls them. The strike results.
Intuition 101. The Marine Corps has been using it for decades in realistic drills. Martial arts and other sports too. The more realistic the prep, the more accurate and effective the mental model in the intuitive brain.
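A toy version of that cue-to-response lookup, with nearest-neighbour matching standing in for the trained intuition (the cue vectors and zones are invented for illustration):

```python
from math import dist

# Practiced associations: observed cue vectors (say, toss height and
# shoulder angle, both normalised to 0..1) -> likely landing zone.
practiced = [
    ((0.9, 0.2), "wide"),
    ((0.4, 0.8), "body"),
    ((0.1, 0.5), "down the T"),
]

def anticipate(cues: tuple) -> str:
    """Respond from the closest practiced pattern -- committing to a
    prediction before any information from the ball itself arrives."""
    return min(practiced, key=lambda ex: dist(ex[0], cues))[1]

print(anticipate((0.85, 0.25)))  # "wide": close to the first pattern
```

Real motor learning is far richer than a lookup table, but the structure is the same: the response is triggered by the opponent's preparatory cues, not by the ball's flight.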
When people compare a computer to a brain, they are not saying it's like a modern PC. They are saying it's like a computer in a loose sense. A lot of people don't get that; they think we're comparing it to the thing on the desk. A computer can mean lots of things, with many different approaches.
"computers do store exact copies of data" - They don't have to? Just our current implementation does.
Saying "the brain is not a computer" is a shorthand for "the brain is not Turing equivalent".
Now, that could be true. But if it is, it would be the first time anybody has found a physical system in our universe that can't be simulated by a Turing machine. It seems like quite a stretch.
The brain has input, goes through various connections, and you end up with a result.
Anyway, before we stray too far from the point I was hoping to make, I'll claim that when most people use the word "compute" they mean a particular task performed at the behest of a human being. This sort of definition is problematic if we're ever to attempt to build an artificial brain or to encounter intelligent alien life radically different from our own.
It all falls back to the problem of agency and the philosophical concept of the self.
I'd argue genetic code (messenger RNA) is a program and inputting genes and outputting proteins is a form of computation.
BTW, I don't believe in free will. You mentioned earlier that there needs to be some magic somewhere and I don't think there is magic. Occam's razor and all that...
Maybe that's a defining characteristic - complexity.
Moreover, if you include the various checksum algorithms from different vendors as part of "storing", then even if you perform a bit-for-bit copy between different disks, they are not stored exactly the same.
Any quantum randomness in the brain can be simulated because we know how to generate quantum randomness, then just feed that into a deterministic physics emulator.
To deny that requires new laws of physics.
Essentially, all models are wrong...but some are useful.
> The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
Premise 1, in my reading, is not reasonable. It would be fallacious to assert that all computers are capable of "behaving intelligently", because we don't have a definition for intelligence, let alone a "universal" intelligence that would be consistent between things, or in any case how an individual thing's intelligence would be tested.
The syllogism is the core of the author's contrarian argument. If we can't prove Premise #1 from the preceding syllogism, then the core argument of this essay does seem to lose its legs.
Really, a single viewpoint/facet/argument about the brain-as-a-computer metaphor should undoubtedly be insufficient.
More useful would be a set of arguments that compare or contrast how a computer and a human brain work. We can use these to coalesce an understanding of how close the metaphor is to reality.
How do the two receive information? How do the two store information, recall information, delete information, update stored processes, etc? All of these perspectives begin to build a whole-picture of the relationship, which would seem more useful that trying to boil it down to a single statement.
I think the author needs to revisit his basic assumptions and walk through the argument again, because I don't think what's laid out in the essay actually supports his conclusions, so confidently espoused in the introduction.
1. The computational theory of mind is the only remotely plausible theory of mind.
2. A remotely plausible theory is better than none at all.
We know we literally process symbols because I can literally read a book, and I can literally write something down.
"Jerry Fodor is my favorite philosopher.
I think that Jerry Fodor is wrong about nearly everything.
...My goal is that this book is for non-representational, embodied, ecological psychology what Fodor’s The Language of Thought (1975) was for rationalist, computational psychology."
Looks like there's little skepticism about the power of neural networks here, even if some of the arguments are framed in terms of computationalism. https://en.wikipedia.org/wiki/Connectionism#Connectionism_vs...
Unless you summarize Chemero's critique, though, I can't really respond to it.
Fodor didn't claim that connectionist models couldn't encode symbolic manipulation, just that the pertinent activity from the perspective of "thought" is symbolic manipulation. So he (I believe) would have said, maybe the connectionist can explain it or maybe he can't and who cares.
He did care that "concepts" were not statistical entities. They were "atomic" and basically tokens for the language of thought. So, Jerry argued, the concept of "Lion" could not be complex i.e. composed of cat, claws, teeth etc.
I don't consider myself either for or against Fodor's argument but I think the summary does great injustice to his arguments.
Fodor does claim that, for what he describes as language, systematicity and compositionality are essential features. However, the "evidence" he cites isn't from a study. It is primarily from facts about language.
To use one of his favorite examples, take these sentences:
The cat ate the rat.
The rat ate the cat.
He takes those facts to be uncontroversial, and he says that that is what he means by systematicity and compositionality.
This "ability comes in clusters" bit is very confusing, and I am not sure what he means by it.
Fodor doesn't care that connectionists don't have good models for symbolic manipulation. He says that connectionist models are only good insofar as they reduce to symbolic manipulation, because symbolic manipulation is the only kind of model we have that demonstrates compositionality and systematicity.
Right, but Chemero's point is that that premise is not so empirically grounded; it is an a priori assumption.
I am not familiar with this literature, but it's ultimately the same point that he makes against Chomsky's poverty of the stimulus argument (the literature on which I know much better): that it's not an empirically grounded premise, and the evidence for such an a priori argument is weak.
It's a computer more advanced by evolution than we've built through our understanding of science.
Or to put it another way, if you really think that isn't computing, you need to argue that this new piece of hardware also isn't computing:
Your second assertion is simply wrong. Asserting that what the brain does isn't the same as what a computer does simply doesn't imply that a computer designed to mimic some aspects of biological neural nets is not a computer.
This kind of phrasing implies that you aren't hearing what I'm saying. You're still picturing physical bits that can be twiddled. I'm not making an analogy to any physical bits at all, I'm not even making an analogy.
I'm talking about bits in the information-theoretic sense. If I can send a signal to a person and as a result that person can do better than random at picking an intended symbol out of a set of symbols, then bits of information were conveyed.
If the person can do the same thing at a later point in time, then bits of information were also stored.
No analogies are required, this is literally the definition of "information" and "bits" since 1948.
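As a rough illustration of that 1948 definition (the symbol counts and probabilities here are invented for the example, not a model of any real channel), the bits conveyed by a cue can be computed directly as the reduction in surprisal about the intended symbol:

```python
import math

def bits_conveyed(n_symbols: int, p_correct: float) -> float:
    """Information gained when a cue raises the receiver's chance of
    picking the intended symbol (out of n equally likely ones) from
    1/n to p_correct, measured as the reduction in surprisal."""
    prior_surprisal = math.log2(n_symbols)       # uncertainty before the cue
    posterior_surprisal = -math.log2(p_correct)  # surprisal of the correct pick
    return prior_surprisal - posterior_surprisal

# Guessing 1 of 16 symbols at pure chance: 0 bits gained.
print(bits_conveyed(16, 1 / 16))  # 0.0
# A cue that makes the right pick certain conveys log2(16) = 4 bits.
print(bits_conveyed(16, 1.0))     # 4.0
# Merely doing better than chance still conveys some bits.
print(bits_conveyed(16, 0.5))     # 3.0
```

The point of the sketch is the parent comment's: nothing here requires physical bit-twiddling; "bits" are defined entirely by how much the guess improves.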
It not being a conventional implementation doesn't mean what it's doing isn't computation.
The evidence I presented of alternative models of computation, neurons, and implementations in those models is here:
While I understand the frustration with the whole Kurzweil camp, this article reads like it was written by an angry high schooler. It also throws the baby out with the bathwater, in that there are clearly valuable analogies to be drawn from computing, as others have mentioned in this thread. My two biggest disagreements:
- "Your brain does not process information": This statement is too broad. How do I do mental math?
- "Memory is not encoded in neurons [as evidenced by a crude drawing of a dollar bill]": Where then does the crude drawing come from?
Thank you for making this point. I am completely unable to follow the author's logic.
The brain and the computer are both pattern-recognizing feedback loops that use input to simulate phenomena.
The computer is still evolving. The human brain much less so.
Humans evolved from animals; why shouldn't computers be able to evolve from humans?
Why would anybody demand that there has to be an internal mental representation of a ball to catch it?
The article mentioned radical enactivism from Anthony Chemero. Another person who's written a lot from this perspective is Dan Hutto: https://uow.academia.edu/DanielDHutto
Here for example is an article describing "Remembering without Stored Contents" https://www.academia.edu/6799100/Remembering_without_Stored_...
and he has a book: Radicalizing Enactivism: Basic Minds without Content. First chapter:
But for a more general, less radical background on this topic, see also work by Rafael Nunez and others:
The Embodied Mind: Cognitive Science and Human Experience
George Lakoff & Rafael Nunez: Where Mathematics Come From: How The Embodied Mind Brings Mathematics Into Being
George Lakoff & Mark Johnson: Philosophy in the Flesh
Mark Johnson: The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason
And there are now around 30-40 books on this topic, at least.
Of course brain wetware is not computer hardware, but
Minds run on wetware as computation runs on hardware.
He claims we don't use algorithms to catch balls but then describes the heuristic we do use algorithmically.
He attacks von Neumann's nascent speculations that the mind can be usefully modelled digitally, but ignores von Neumann's proposal for an alternative type of computation, indeterminate and probabilistic - from the same book, The Computer & the Brain (1958).
The inability of most people to draw from memory as accurately as from life is evidence of compartmentalisation, not a lack of storage of mental images - drawing from memory is a skill that can be acquired.
Then he goes all in by proposing hard AI as fact: "Reasonable premise #1: all computers are capable of behaving intelligently"
No doubt minds and brains are a deeper mystery than silicon and software but the author fails to demonstrate why or propose how.
IMHO the roboticist Rodney Brooks was onto something when he proposed that intelligence must be embodied: AI will come from interacting with the world.
The author cites Chemero's work Radical Embodied Cognitive Science, which is very well argued and interesting.
What he forgets is that "information processor" is a metaphor also in respect to the machines we have on our desks. They don't really process information - they just route some electrons around. But metaphors are useful.
- Good models are important
While the traditional computer model is good at explaining behavior, it doesn't help us ask new questions. And no matter how well what we currently know about the brain fits the traditional computer model, if it can't be used to inspire new questions then we need to explore new models. Feynman explains this better [https://youtu.be/NM-zWTU7X-k]. Even though our finite non-magic brains could fit in a computer, a computer isn't the best metaphor.
- Computers don't act like brains by themselves.
In any FUZZY LOGIC compiled and run on a computer, all your code is translated into functions and data. LOSSY COMPRESSION is actually the application of experiments on perception, and not an inherent property of computers. A jpg is designed to look the same while using less space, but the information from it is not integrated the way a human integrates it. Our MEMORY is not like a computer's. It is shockingly constructive, something that isn't expected under the computer model.
- New programming paradigms might be better models.
My beef with the metaphor is that it has very separate memory and computation. I don't think the brain "reads and writes from memory". I think it creates and updates a mesh of functions. Something like FUNCTIONAL PROGRAMMING could be used to model this. What if instead of looking for a neural hard drive, we treat each neuron as a function? What if generation of a dollar bill is actually OPTIMIZATION of our recognition function? What if we are a PIPELINE of functions, each feeding into the next from retinas to occipital lobe to thalamus to motor cortex to muscles? I would rather see new ideas coming from us than holding on to an outdated metaphor.
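That "pipeline of functions" idea can be sketched in a few lines. Everything below is invented for illustration (the stage names and thresholds are toys, not anatomy): each "region" is a pure function, and perception is just composition, with no separate read-from-memory step anywhere:

```python
from functools import reduce

def pipeline(*stages):
    """Compose processing stages left-to-right: output of one feeds the next."""
    return lambda x: reduce(lambda acc, f: f(acc), stages, x)

# Hypothetical stages, each a pure function rather than a memory lookup.
retina       = lambda light: [l * 0.9 for l in light]          # transduce
occipital    = lambda sig: sum(sig) / len(sig)                 # extract a feature
thalamus     = lambda feat: "bright" if feat > 0.5 else "dim"  # classify
motor_cortex = lambda label: {"bright": "squint", "dim": "widen"}[label]

see = pipeline(retina, occipital, thalamus, motor_cortex)
print(see([0.8, 0.9, 1.0]))  # squint
print(see([0.1, 0.2, 0.1]))  # widen
```

"Learning" in this picture would be replacing one of the stage functions with an updated one, rather than writing to a storage location.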
To continue with the IP metaphor that the author decries, it's like we only have crude wireframes and physics models in our heads, and we project the stimuli we receive to those models as best we can in realtime in order to create our conscious experience. Perhaps the sparsity of the "models" our brains contain is what allows to adapt/reuse them to comprehend such a broad array of phenomena.
The disorientation that people can experience during sensory deprivation or even extreme social isolation is perhaps another indication that the coherence of our conscious experience is tightly linked to our real-time environmental stimuli.
This all makes one wonder whether we could make better intelligent agents by somehow measuring the "coherence" of an agent's experience and turn that into a positive reinforcement learning signal.
"The term radical embodied cognition is from Andy Clark, who defines it as follows: Thesis of Radical Embodied Cognition[:] Structured, symbolic, representational, and computational views of cognition are mistaken. Embodied cognition is best studied by means of noncomputational and nonrepresentational ideas and explanatory schemes, involving, e.g., the tools of Dynamical Systems theory."
"...antirepresentationalism (which implies anticomputationalism) is the core of radical embodied cognitive science."
There's a long discussion of Randall Beer's 2003 paper "The Dynamics of Active Categorical Perception in an Evolved Model Agent" that uses a continuous time, real-valued neural network (CTRNN). https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/...
"Using the model of the CTRNN alone, one can only tell how an instantaneous input will affect a previously inactive network. But because the network is recurrent, the effect of any instantaneous input to the network will be largely determined by the network’s background activity when the input arrives, and that background activity will be determined by a series of prior inputs. This model of the CTRNN, in other words, is informative only if one knows what flow of prior inputs to the neural network typically precedes (and so determines the typical background activity for) a given input. The impact of the visual stimulus is determined by prior stimuli and the behavioral response to those prior stimuli. The model of the CTRNN is useful, that is, only when
combined with the models of the whole coupled system and the agent–environment dynamics. These three dynamical systems compose a single tripartite model. ...The models also show that the agent’s "knowledge" does not reside in its evolved nervous system. The ability to categorize the object as a circle or a diamond requires temporally extended movement on the part of the agent, and that movement is driven by the nature and location of the object as well as the nervous system. To do justice to the knowledge, one must describe the agent’s brain, body, and environment. Notice that none of these dynamical models refers to representations in the CTRNN in explaining the agent’s behavior. The explanation is of what the agent does (and might do), not of how it represents the world. This variety of explanation—of the agent acting in the environment and not of the agent as representer—is a common feature of dynamical modeling..."
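The point about background activity in the quote can be reproduced with a toy far simpler than Beer's evolved agent: a single recurrent CTRNN-style unit (all parameters here are invented, and this is nothing like the actual model in the paper). The same final stimulus lands the unit in different states depending on the input history that preceded it:

```python
import math

def step(y, inp, w=5.0, b=-2.5, tau=1.0, dt=0.1):
    """One Euler step of a single-unit CTRNN:
    tau * dy/dt = -y + w * sigmoid(y) + b + input."""
    sigma = 1.0 / (1.0 + math.exp(-y))
    return y + dt * (-y + w * sigma + b + inp) / tau

def run(inputs, y=0.0):
    for i in inputs:
        y = step(y, i)
    return y

# Identical final stimulus (zero input for 50 steps), but the background
# activity left by the earlier inputs determines where the unit settles:
print(run([ 2.0] * 50 + [0.0] * 50) > 1.0)   # True: high branch
print(run([-2.0] * 50 + [0.0] * 50) < -1.0)  # True: low branch
```

The self-excitation makes the unit bistable, so "what an input does" is unanswerable without knowing the flow of prior inputs, which is exactly the quote's argument for modeling the whole coupled system.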
My knowledge of how computers work is fairly shallow and my knowledge of neuroscience is basically none. With that caveat: this answer https://www.quora.com/Is-the-human-brain-analog-or-digital best fits my biases/intuitions. TLDR: The brain has a digital aspect -- neurons fire and send a signal or they do not -- and an analog aspect -- whether or not a neuron fires depends on the chemical(?) conditions in which it's embedded and inside the cell.
This makes sense to me. That being said, I'm not sure whether I actually disagree with the author. If anyone is trying to find the brain's equivalent of a CPU, that's a fool's errand. It also seems likely that digitizing a brain is not possible, certainly not feasible. Digitizing analog systems is always(?) lossy -- the lossiness just doesn't matter with things like music due to the limitations in human hearing.
This doesn't mean we can't create artificial brains, it just means that they can't be digital machines.
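The digital-plus-analog picture from that Quora answer is roughly the classic leaky integrate-and-fire model. Here's a minimal sketch (parameters invented, heavily simplified): the membrane voltage is continuous and analog, while the spike is an all-or-none digital event:

```python
def lif_run(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire: the membrane voltage (analog) leaks and
    integrates input; a spike (digital, all-or-none) fires at threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v += dt * (-v / tau + i)  # analog dynamics
        if v >= v_thresh:         # digital event
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Weak drive never reaches threshold; stronger drive produces spikes.
print(sum(lif_run([0.05] * 50)))      # 0
print(sum(lif_run([0.2] * 50)) > 0)   # True
```

Whether the neuron fires depends entirely on the continuous conditions it sits in, which matches the "digital aspect riding on an analog aspect" description.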
My personal, bad, metaphor for memories is that they work somewhat like linked lists. It's easy enough to recite a poem start to finish. Harder to recite it if you start in the middle. Really hard to recite it backwards. Is this true? Who knows. Does it tell us anything about the underlying mechanisms? Not really.
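For what the (self-admittedly bad) metaphor is worth, a singly linked list really does behave that way: forward traversal is trivial, but with only "next" pointers, reciting backwards costs a fresh walk from the head for every word:

```python
class Node:
    def __init__(self, word, nxt=None):
        self.word, self.nxt = word, nxt

def build(words):
    head = None
    for w in reversed(words):
        head = Node(w, head)
    return head

def recite_forward(head):
    out, node = [], head
    while node:
        out.append(node.word)
        node = node.nxt
    return out

def recite_backward(head):
    # Only "next" pointers exist, so reaching each previous word means
    # walking from the head again: O(n^2) versus O(n) going forward.
    n = len(recite_forward(head))
    out = []
    for i in range(n - 1, -1, -1):
        node = head
        for _ in range(i):
            node = node.nxt
        out.append(node.word)
    return out

poem = build(["roses", "are", "red"])
print(recite_forward(poem))   # ['roses', 'are', 'red']
print(recite_backward(poem))  # ['red', 'are', 'roses']
```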
There are certainly odd things in this article. What I think the author wants to get at:
We don't store specific information like "on a dollar bill, there's a line going from x of the left side to y of the bottom side with a curvature of z" in a specific container labeled "info about dollar bills", to be read out every time you interact with one.
Instead, each time we pay using such a bill, the chain of neurons firing from the moment you open your purse to the moment you get your receipt, as a whole, is responsible for your internal representation of a dollar bill: (1) first glimpse of sheets of paper in purse (2) fumble individual sheets to examine more closely (3) recognize key elements like green colour, rectangular shape, famous bust in the center (4) in case you didn't, now you know you've got cash in there (5) remember the total, do some arithmetic to figure out which bills to hand over (6) extract bills from purse (7) extend hand to hand cash over to cashier (8) wait for receipt/pack things (9) get receipt (10) purchase complete.
This entire chain (which is obviously incomplete for the sake of demonstration), with all the completely unrelated stuff about rummaging in a purse or human interaction, comes together to form what you think of as a dollar bill. This is how we store information, by relating it to stuff we already know.
Which is why it makes sense that babies come into this world with only the bare minimum. We don't pop out ready made, that's the whole point of childhood and adolescence. We build ourselves, unconsciously incorporating our surroundings into our personality, by constantly relating to what we know, comparing new things to old things. This implies that a faulty impression early on has disastrous consequences in whatever is built afterwards, meaning most of it.
(edit) just came up with a nice formulation: the brain doesn't store raw data as in pixels or decimal values, it stores characteristics and rebuilds the thing you're thinking about as well as your medium of expression allows it to (which is why you can always think about something, but when asked to draw or describe it, it just seems impossible). This explanation is far from perfect, and I don't understand how our notion of numbers, for example, comes to be, but seeing as maths is really just a load of characteristics giving interesting results when combined, I see no problem.
Most importantly, there's nothing in here that directly argues against the information processing metaphor. The author gives a bad argument that he thinks lies behind the metaphor and shows why that argument is bad. But a) I don't recognize this argument as what motivates the metaphor and b) to show that one argument for a conclusion fails is not to show that the conclusion is false.
The author often points to differences between us and actual computers. No one who employs the information processing metaphor is claiming that we are identical to human-made computers. Rather, the claim is that there's some abstract sense of 'information processing' under which both human brains and computers are implementations. In order for both of those things to be implementations, they need not do so with the exact same kinds of mechanisms, nor must they have the exact same kinds of abilities. (This is why I find the discussion of the dollar drawing case so strange - no one is claiming that we store visual representations of objects in the way that an actual human-made computer does.)
When the author writes:
"To catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms."
What he describes is not a non-algorithm. Insofar as it's a procedure that takes in certain inputs and follows a series of steps in order to achieve some goal - it just is an algorithm, albeit a somewhat simple one.
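The "linear optical trajectory" procedure can in fact be written down as a tiny control loop. This is a deliberately crude sketch with invented units and an invented optics model, but it shows the shape of the algorithm: no trajectory prediction, no landing-point computation, just repeatedly nulling the drift in the ball's visual angle:

```python
def catch_step(fielder_x, ball_angle, target_angle, gain=2.0):
    """One step of the heuristic: move so the ball's visual angle stays
    constant; no physics is simulated, only the angular error is used."""
    return fielder_x + gain * (ball_angle - target_angle)

# Toy simulation: the apparent angle error shrinks as the fielder
# closes on the (unknown to the fielder) landing spot at x = 5.0.
fielder, target = 0.0, 1.0
for _ in range(100):
    ball_angle = target + 0.1 * (5.0 - fielder)  # hypothetical optics model
    fielder = catch_step(fielder, ball_angle, target)
print(round(fielder, 2))  # 5.0
```

A feedback loop like this is exactly "a series of steps over inputs toward a goal", so describing it doesn't remove it from the class of algorithms.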
More importantly, the author seems to misunderstand what is meant when researchers say that the brain 'runs' an algorithm. Of course you don't consciously crunch numbers when your visual system is chugging away, trying to analyze and sort through the...data...it's getting from the retina. But the lack of conscious computation is not evidence for the complete lack of computation.
There are all sorts of legitimate criticisms of cognitive science - and the author discusses some of these at the end of the piece without really understanding/explaining what's going on behind them. But the author says nothing convincing that it's the failure of the information processing metaphor that explains the limitations/failures of cognitive science.
Certainly, people misuse/misunderstand the information-processing metaphor/claim. (Though the people who do this tend not to be cognitjve scientists.) But that's not to say that the metaphor is complete garbage.
It's just not computer in the way you think a computer works.
In The New York Times of February 3, 1853, an obituary stated: "Mr. Walker was widely known as an accomplished Astronomer and a skillful Computer."
So to claim that the very thing which enables us to be a computer, is not a computer, might simply be short-sighted or trying to be absurd.
These types of computers implement mathematical functions that process signals from the real world in their native form in real time. Each component implements primitive functions that act on electrical input. They're usually arranged in connected components that flow from one thing to another or have feedback loops. They have no memory but can emulate it by generating on the fly. Doesn't that sound awfully familiar? ;)
I've been saying for years that the brain is not digital and is a computer. The past year or two learning hardware has just solidified that more. Just look at its properties to instantly see analog effects:
There's models for general-purpose, analog computers with prototypes built, analog models of brain functions, and analog implementations of neural networks. All show that the brain might actually be a general-purpose, analog (or mixed) computer. Moreover, it seems to be a self-modifying, analog machine with some emergent properties starting at childhood in its early phase. The complexity of this thing and re-creating any one brain's function would be mindboggling as author correctly noted.
So, the brain is not a digital computer. It's an analog or mixed-signal computer with massive redundancy and ability to reconfigure itself. The descriptions of inconsistencies with digital match up nicely to model approximations in an analog style. Those issues even contributed to switch to digital for better accuracy/precision. Now, they're going to have to switch back if they want to match the greatest computer ever made. :)
I'll leave you with examples of analog and neural.
Note: The above, my favorite as geometrically closer to brains, is a 60 million synapse system that took 294,912 cores to simulate. That's what a handful of analog chips are doing in real-time. Behold the power! :)
Note: Siegelmann writes on non-Turing models for computation focusing on analog and neural. Her lab is all over the theoretical aspects of this stuff.
Note: Shows how many of these things are mathematical relationships. Other references exploit that given it's what analog implements.
Note: A few were interested in simulating actual brain structures with memristors. Above is the latest take on that in Russia.
So, there you all go. It's a computer. It's an analog computer. Also, ends the parallel debates about whether you can have a general-purpose, analog computer or whether it can top digital. A few, analog computers working as a team... invented... digital computers. QED. :)
is the result of this article the conclusion that computers are just boring, uncreative, overly logical brains? Who woulda thunk?
A baby’s vision is blurry, but it pays special attention to faces
The latter phrase reflects a well established and important developmental fact. I'm very curious about the first phrase. Is this also an established fact in the science of human development? On the face of it, it seems like an extraordinary piece of research if that's true. One cannot, of course, simply ask a baby if their vision is blurry. Their visual acuity needs to be inferred, and some cleverness would be required to do this reliably and accurately.
Anyone know which study this refers to? (Or is this an example of why this article is pretty darn fluffy?)
EDIT: Arrgh! This article is ignorant fluff!
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
The brain can contain information. That's simply an evident fact. Does the author understand what information means?
The human nervous system also embodies algorithms. That's also a fact established by neuro-biology. The visual system is full of algorithms. I think the author doesn't understand what these words mean. All of the other words seem to be thrown in there to evoke images of machine parts that don't look like people parts. It's exactly the cargo-cult misunderstanding of the underlying principles, in mistaken deference to resemblances.
Also, anyone who's delved into Rubik's Cube solving knows that we humans can develop and embody algorithms. (The cubing scene plays fast and loose with the term, but they do use algorithms in a mathematical sense.) This author loses a whole lot of academic credibility for writing this, and the editors of this website do as well for publishing this. This is just rank ignorance on the level of publishing a perpetual motion machine article. It's 2016. If you are publishing a factual article of interest to intellectuals, and you don't understand what information or an algorithm is, you have no more business writing, editing, and publishing such an article than if you had no idea of what the periodic table means.
Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
As a traditional musician who can play several hundred melodies from memory, I can say this is complete BS. He's simply playing stupid games with terminology. What I know from my experience and that of many fellow musicians is that we do have to retrieve melodies. In fact, sometimes we have to take a few minutes to make sure we have all parts of the melody retrieved. You can go out to pubs in SF and see traditional musicians sit there and do this. Sometimes the right cue needs to occur for the complete retrieval to happen. Some musicians have certain melodies they can only remember after they remember the name.
After taking a look at the paper, take a look at this post by the Google deep learning team. Take a look especially at the images.
I think most if not all of the author's arguments that the brain is not a computer, can also be applied to Google's deep learning (which obviously runs on computers).
I think the author is very naive about how broad a computer is and what a computer can do. Deep learning is obviously running on a computer, but like the author states for the brain, any single image or fact is probably not encoded in a single place.
In addition, we have probabilistic data structures, for example a bloom filter. Would a computer running a bloom filter to recognize data it had seen before not be a computer because it did not store a complete representation of any one item?
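A minimal bloom filter makes the point concrete: it stores no item at all, only hashed bits, yet can still "recognize" things it has seen before (with a small false-positive rate and no false negatives). The sizes below are arbitrary illustration values:

```python
import hashlib

class BloomFilter:
    """Probabilistic membership test: no item is stored, only k hashed
    bit positions per item, yet previously-seen items are recognized."""
    def __init__(self, size=1024, k=3):
        self.size, self.k, self.bits = size, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

seen = BloomFilter()
seen.add("dollar bill")
print("dollar bill" in seen)  # True: recognized without being stored
print("euro note" in seen)    # almost certainly False
```

Like the dollar-bill drawing example, it can verify far better than it can reproduce: there is no way to read the original item back out of the filter.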
What we are seeing now, is that as computers go beyond just storing and retrieving data, to actually doing stuff like speech/image/pattern recognition, the representation is becoming more and more like the brain representation.
In addition, the author ignores that the brain did indeed develop a way to encode information exactly. If the author would like more information about this, I suggest they look up "drawing" and "writing" in Wikipedia. With drawing and especially with writing, the brain developed a way to encode information exactly that could later be retrieved. In addition, it could be retrieved by other brains (if they know the language).
Also, the author proposes straw man arguments for the other side. I am somewhat familiar with both computers and brains (I trained in neurosurgery, but now work as a programmer), and I never met a professional who claimed that memories are stored in individual neurons! I was always taught that memories are encoded in the connections between neurons - which is very similar to how neural networks encode information (as the strength of connections between elements of the neural network).
It is also interesting that he quotes Kandel. When I actually read Principles of Neural Science (by Kandel and Schwartz - http://www.amazon.com/Principles-Neural-Science-Eric-Kandel/...), I got the distinct impression that Kandel actually viewed the brain as an information processor, with lower level information being processed into higher level representations (see for example the chapter on the visual system).
So overall, the author takes a very limited view of both computers and brain and concludes (I think falsely) that the brain is not a computer.
It is trivially true that the brain processes information - that is not much of a conclusion, it is a starting point, and it certainly does not mean that it is like a digital computer.
> "Face It, Your Brain Is a Computer"
I think it fits as a counterpoint.
I find it ironic that one of the things the author claims humans don't have is memories. Memory is a concept that arose to describe an aspect of human experience thousands of years before computers existed. Using the term "memory" to describe information storage in computers is indeed an analogy but it's an analogy in the opposite direction from what the author claims.