
The brain is not a computer - dkucinskas
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
======
anotheryou
I don't see it.

\- The brain is fuzzier, but it still stores the link between the smell of
roses and the look of roses, just as a probability, like a Bayes network. And
you will fall for illusions, just as a Bayes network can be wrong ("for a
second I thought this was..." and then more information from the senses
falsifies/corrects the probabilities).

\- The brain is imperfect at reading from memory, but it still reads. It just
uses really good, lossy compression. It loses a lot of detail, but often
fills in the holes probabilistically. Besides neural nets, in computers this
would be: defaults, recovery blocks, etc. In part the good compression comes
from an additional layer of abstraction, but computers can do this too. A
very simple example is the blurry color layers in JPEG.

\- We are better at recognizing than recalling, because a highly compressed
memory is not enough to recreate the original, but has enough indicators to
verify against. This is very much like a checksum.
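That checksum idea can be sketched in a few lines of Python (a loose analogy, not a claim about neural mechanism; the "experience" strings are invented):

```python
import hashlib

def trace(experience: str) -> str:
    """Keep only a tiny digest of the experience, not the experience itself."""
    return hashlib.sha256(experience.encode()).hexdigest()[:8]

stored = trace("the face of an old classmate")

# Recall is impossible: 8 hex characters cannot regenerate the original.
# Recognition is easy: re-derive the digest from a candidate and compare.
def recognize(candidate: str) -> bool:
    return trace(candidate) == stored

print(recognize("the face of an old classmate"))  # True
print(recognize("a stranger's face"))             # False
```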

What there is, is a bias in the "hardware". Our brains are not good at
deterministic iteration; computers struggle with the complexity of forming
wisdom, and fuzziness and feedback do not come naturally to them either. But
in principle, we are both Turing complete or something :)
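And the Bayes-network point from the first bullet can be made concrete with a minimal sketch (all the probabilities are invented):

```python
# Toy Bayesian "perception": a belief that first leans the wrong way,
# then gets corrected as more sensory evidence arrives.

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def update(prior, likelihood):
    """One Bayes step: posterior is proportional to prior times likelihood."""
    return normalize({h: prior[h] * likelihood[h] for h in prior})

# Prior belief about the shape lying on the path.
belief = {"snake": 0.5, "rope": 0.5}

# A brief, blurry glimpse weakly favors "snake"...
belief = update(belief, {"snake": 0.7, "rope": 0.3})
# ...but a longer look (no movement, fibrous texture) strongly favors "rope".
belief = update(belief, {"snake": 0.1, "rope": 0.9})

print(belief)  # "rope" now dominates: the momentary illusion is corrected
```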

~~~
alexwebb2
Agreed. This article struck me as exceedingly pedantic.

Given the author's background, I'd put money on the idea that this is the
result of years of frustration at having his life's work dismissively reduced
by casual observers to something like "yeah, it's basically a computer" and
thinking they therefore understand all the intricacies of the human brain. I
can see how that might grate on a person in his field and trigger a response
like this.

~~~
radarsat1
It's not even pedantic, it's just wrong. He's ignoring years of neuroscience.

We _know_ that the brain uses signals, and that these signals are composed of
codes. We _know_ that to interpret the world, certain information processing
must take place. We are learning more about the exact nature of this
processing, more about how neurons and even other fundamentals of the body
code information and contribute to processing (e.g. chemical processes), but
they definitely _process information_. We even know some of the codes.

On the other hand, if the author wants to argue about the nature of what
information is, or what processing is... he's going to have a steep hill to
climb.

~~~
karmajunkie
Got a few citations on that? Neurons react to stimuli, that's certainly well-
established. But codes?

~~~
radarsat1
Well, afaik we can't easily record individual neurons _inside_ the brain,
although there are some studies working toward that goal. Most brain studies
work on areas of activity. Even MRI is not that precise, to be frank. But for
studying human perception, a technique often used is called microneurography,
which involves sticking a probe into the nerve fibers along a known neural
pathway and recording the electrical activity.

From this we can often find things like:

\- frequency of impulses increases as stimulus increases

\- different numbers of neurons in a similar area fire simultaneously

Things like that. The wikipedia articles linked by wickedagain contain a good
summary. I won't link specific sources, since there are thousands, but here's
one for instance:

[http://jn.physiology.org/content/85/4/1561](http://jn.physiology.org/content/85/4/1561)

There in fig 6 for example you can see that the frequency of nerve firing
increases in the tibial nerve as heat is applied to the paw. (Yes, a lot of
information comes from horrible animal studies. It's part of the reason I
didn't continue in this domain of work -- although rest assured, this
technique is possible with humans, without harm.)
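The first bullet -- firing rate tracking stimulus intensity -- can be caricatured in a few lines (illustrative numbers only, not data from the paper):

```python
import random

random.seed(0)  # reproducible spike times

def fire(intensity, window=1.0, base_rate=5.0, gain=10.0):
    """Emit spike times at a rate that grows with stimulus intensity."""
    rate = base_rate + gain * intensity          # spikes per second
    n = int(rate * window)
    return sorted(random.uniform(0, window) for _ in range(n))

def decode(spikes, window=1.0, base_rate=5.0, gain=10.0):
    """Invert the code: estimate intensity from the observed spike count."""
    return (len(spikes) / window - base_rate) / gain

weak, strong = fire(0.2), fire(0.9)
print(len(weak), len(strong))    # the stronger stimulus produces more spikes
print(round(decode(strong), 2))  # 0.9 -- the intensity is recoverable
```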

So, do impulses in the perceptual nervous system reflect how brains work, how
we think? It's hard to say for sure, but it's clear that information is coded
according to certain possibilities related to how neurons work, and this
information must be combined and processed somehow to create what we call
perceptions. Is it such a stretch to imagine that these perceptions become
their own codes, abstract reductions of correlations of perceptual
information, that are evaluated, compared, and combined in the much, much more
nerve-dense central cortex to produce what we call "thought"?

------
Freaky
> Your brain does not process information, retrieve knowledge or store
> memories.

Next up, an ornithologist tries to argue that birds don't fly, because they
have muscles instead of engines and nobody can pinpoint the ailerons or
control stick.

What else could a brain possibly be doing? What do you think a brain
responding to stimulus and changing as a result of it _is_, if not a form of
information processing and storage? And why would that be fundamentally non-
computational just because it doesn't look much like how we might hand-write
software to do it?

~~~
cristianpascu
The day before this article appeared, I attended a talk about consciousness,
and the presenter dismissed the functional approaches to explaining
consciousness. That is, even if consciousness processes information, it is
not an information processor LIKE the computer is. You cannot define a thing
by what it does, but only by what it is. The electron is not something that
moves through space, but something that has mass, charge, and so on. That is,
you define it by its intrinsic properties. Otherwise, you could say
consciousness is what made this comment here possible. That would tell you
absolutely nothing.

But there was a biologist who basically said what you're saying: what then is
'walking'; don't you define it by what it does? This objection misses the
mark because 'walking' is an action, not an entity. The brain, on the other
hand, presumed to be responsible for consciousness, is not (not equal to)
'talking', 'walking', 'imagination', or 'recognition'. These are faculties,
but not the underlying physical/ontological support that makes these
faculties possible.

The whole project of explaining consciousness must reveal the underlying
substance, be it matter or something else, that makes it possible, and the
mechanism by which all the faculties associated with consciousness arise.

And one more thing: the main conundrum of explaining consciousness is
'qualia': that which is experienced by being conscious, what it is like to be
conscious.

~~~
tkahnoski
This triggered a thought in my head: homo sapiens are basically "better"
consciousness machines, whereas a chimpanzee or dolphin is a slightly less
powerful conscious machine, and so on down to a cockroach, which is a tiny
pre-programmed micro-controller.

This would imply some of us are less conscious than others to some degree.
However, I now see the crux of the issue you've discussed by asking, "How do I
measure consciousness?" Is it binary or is it a gradient (a multi-dimensional
gradient)?

~~~
scalio
I would say a person who rarely stops to think about what it all means, what
it's for, or any kind of existential question is not as conscious as somebody
who reflects daily on whether or not they've done a good job of not becoming
a little bit more insane today.

I'd even argue that people who never ponder their existence aren't conscious
at all. They simply execute their DNA, nothing more, nothing less. On the
other side are the few whose DNA and upbringing have enabled them to act on
themselves. I see this as the main difference between conscious and
unconscious people: the former take in their surroundings and the voices in
their head, draw conclusions, and act on those voices, whereas the latter are
happy with the reward-and-punishment system nature has laid out in our
brains.

Then of course, there's everything in between, mixed-and-matched wildly,
leading to the gradient that is consciousness (quite literally a gradient,
slipping off is very easy, climbing back onboard can be damn near impossible).

------
tim333
My, the article is misargued mush. It's hard to analyse all 4000 words of it,
but to pick on a few points:

>But here is what we are not born with: information, data, rules, software,
knowledge, lexicons, representations, algorithms, programs, models, memories,
images, processors, subroutines, encoders, decoders, symbols, or buffers –
design elements that allow digital computers to behave somewhat intelligently.
Not only are we not born with such things, we also don’t develop them – ever.

I have several memories.

>cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in
the brain

We can build artificial neural networks right now that recognise faces, and
no doubt, fed the right inputs, ones that could recognise Beethoven and
probably tell you which symphony is playing. You would also not 'find a copy'
of the 5th in the neural network. It is likely that the human network of
neurons works in a similar manner to the artificial one, with the learning
stored in modifications to synapses and changes to memory values
respectively. I mention this rather obvious stuff because the author goes on
to suggest:

>Meanwhile, vast sums of money are being raised for brain research, based in
some cases on faulty ideas and promises that cannot be kept

On the basis of the above kind of semi-arguments. The fact that the Human
Brain Project may be a poor use of research funds is a different issue.

>I challenged researchers there to account for intelligent human behaviour
without reference to any aspect of the IP metaphor.

If you think of anything we normally regard as intelligent, such as playing
chess, it involves taking in some information, such as where the pieces are,
and then doing something with it. How that is supposed to prove his point is
lost on me.

etc and on for dozens of other wrong implications and twisted logic. Life's
too short...

(For a kind of counter argument check out the Recurrent Neural Networks and
Inceptionism articles
[https://news.ycombinator.com/item?id=9584325](https://news.ycombinator.com/item?id=9584325)
and
[https://news.ycombinator.com/item?id=9736598](https://news.ycombinator.com/item?id=9736598)
for how eerily close the behaviour of artificial neural networks can be to
human ones)
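To make the 'no copy' point concrete: here is a toy artificial neuron (data and numbers invented) that learns to recognise a pattern even though its weights are not a stored copy of any example:

```python
# A single neuron learns to recognise a "target pattern" via simple
# perceptron updates. After training, the weights classify correctly,
# yet no training example can be read back out of them.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Training data: target-pattern examples (label 1) vs. others (label 0).
data = [([1, 1, 0, 0], 1), ([1, 1, 1, 0], 1),
        ([0, 0, 1, 1], 0), ([0, 1, 0, 1], 0)]

w = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
for _ in range(20):                       # classic perceptron rule
    for x, label in data:
        pred = 1 if dot(w, x) + bias > 0 else 0
        err = label - pred
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        bias += 0.1 * err

print(w)  # learned weights -- not equal to any input pattern
print([1 if dot(w, x) + bias > 0 else 0 for x, _ in data])  # [1, 1, 0, 0]
```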

~~~
khedoros
I found it odd how he noted rooting and nursing instincts in infants, then
said that we aren't born with rules or software. That seems self-
contradictory.

~~~
rdiddly
He's saying "rules" and "software" are just metaphors to attempt to explain
these instincts. They are a way of saying it.

~~~
khedoros
Well, yes, but he's saying that "rules" and "software" are _incorrect_
metaphors, and that they are a way of saying it _wrong_. I think that's an
important distinction.

What strikes me in the section of the article with the six metaphors is that
each one is "true" at a certain level of abstraction, and as time progresses,
the abstraction gets thinner and thinner and closer to reality.

The author's claims that I have the biggest problem with are that brains don't
store or process information. We clearly do. Our behavior depends at least
partly on past experience and our perceptions of current circumstances. When I
remember something, do I assert the read and address lines on a NAND chip and
latch the data into a register? No, certainly not. Do I trace a weighted graph
of neural connections forming a fuzzy cloud of meaning? At the least, that's
closer to what I'm doing, and I'd still classify it as information storage and
retrieval.

The author doesn't seem like they're attacking the imprecision of a metaphor.
They seem like they're rejecting the "brain as a computer" metaphor outright.

------
Udo
This article falls somewhere between a strawman argument and a deliberate
misrepresentation of scientific understanding.

> " _But here is what we are not born with: information, data, rules,
> software, knowledge, lexicons, representations, algorithms, programs,
> models, memories, images, processors, subroutines, encoders, decoders,
> symbols, or buffers_ "

We are born with most (if not all) of these things, and we develop more of
them as we grow and learn. Sure, the nomenclature here was chosen for
ridicule, but functionally those elements are present.

The brain is not a magical pudding that works in a completely occult and
mysterious manner. While we do have a ways to go in deciphering the minutiae
of its architecture and operation, the article engages in assertions that run
completely counter to neurobiological facts we have already learned:

> " _We don’t store words or the rules that tell us how to manipulate them. We
> don’t create representations of visual stimuli, store them in a short-term
> memory buffer, and then transfer the representation into a long-term memory
> device. We don’t retrieve information or images or words from memory
> registers. Computers do all of these things, but organisms do not._ "

This is at best a half-truthy strawman argument. Both our computer technology
and the brain's architecture employ different mechanisms by virtue of their
underlying implementation. However, the functions performed do converge if you
look at what's actually being computed. There are entire fields of both CS and
neuro research that are completely based on this overlap.

> " _Forgive me for this introduction to computing, but I need to be clear:
> computers really do operate on symbolic representations of the world. They
> really store and retrieve. They really process. They really have physical
> memories. They really are guided in everything they do, without exception,
> by algorithms.
>
> Humans, on the other hand, do not – never did, never will._ "

This is utter bullshit. Humans totally operate on symbolic representations;
decades of neuroscience have shown us that. Humans absolutely do store,
process, and retrieve information. Humans undoubtedly possess physical
memories; just because they're not encoded byte-wise doesn't give you an
excuse to make these garbage claims.

~~~
RGamma
To add to this: the fallacy is thinking that, simply because computations in
common computers are done with a bits-and-bytes approach in hardware and
processes in the brain are not, computers would be unable to express
computation that comes very close to the brain's processes.

That would be exactly a mapping from one computational calculus (e.g. a
"brain calculus") to another (e.g. binary logic) via a compiler. It might
still be that the physical processes governing brain function somehow elude
computational description with digital logic (i.e. such a compiler cannot
exist -- though it's not clear how that could be the case, given the notions
of universality attached to Turing machines), but then we'd surely have a new
class of computational power on our hands.

And to say that brains are capable of more than Turing-complete (ignoring
memory constraints) computation seems hard to believe (see also
[https://en.wikipedia.org/wiki/Digital_physics](https://en.wikipedia.org/wiki/Digital_physics)).
Now how to factor true randomness (if it exists in our universe) into this I
don't know, and maybe it doesn't even play a role...

~~~
nickpsecurity
We do. Always have. See my main comment to OP on analog computation and its
models that were used for NN's. Hint: One wafer (maybe 40 chips) of analog
neurals takes almost 400,000 digital cores to emulate.

------
dekhn
This article is filled with falsehoods, and poorly reasoned/incorrect
arguments.

In particular, we can bat it away by citing the fact that there are humans
who can recite pi to a large number of decimal places (proving, specifically,
that they can store and retrieve digital data). And there are humans who can
do long multiplication in their head by following a series of procedures.
Also, humans can store, retrieve, and communicate - with near-perfect
fidelity -
image data ([http://www.dailymail.co.uk/news/article-1223790/Autistic-
art...](http://www.dailymail.co.uk/news/article-1223790/Autistic-artist-
draws-18ft-picture-New-York-skyline-memory.html))

Whatever the specific molecular structure of the brain's representation of
the experience and memory of Beethoven's 5th may be, it is almost certainly
not stored in a single neuron, but this hardly prevents talented musicians
from playing the 5th from memory.

~~~
TheOtherHobbes
Are brains Turing complete?

Are _all_ brains Turing complete?

I'm not convinced by your points. Savants are exceptionally uncommon, and
there's no evidence most people can learn the skills they have. Many savants
are famously bad at everyday human skills, so clearly there's a trade-off -
at best.

Storing and operating on numbers with arbitrary precision is a completely
trivial operation for all but the oldest computers. But the abilities most
humans find trivial - exploring an arbitrary environment by body movement
(without falling over), throwing and catching things, playing sports, using
tools creatively, reproducing and maintaining relationships, communicating
using complex natural languages - are huge engineering problems in the digital
space, and most aren't anywhere close to being solved definitively.

So let's ask again - how many of these problems can be solved using Turing
complete digital systems?

I don't think anyone can honestly say "All of them." Given the state of the
art, a more realistic answer is "We just don't know yet."

------
bambax
> _to catch the ball, the player simply needs to keep moving in a way that
> keeps the ball in a constant visual relationship with respect to home plate
> and the surrounding scenery_

In the book "The Inner Game of Tennis" (fantastic read, more about the brain
than tennis), the author explains how, during service, the other player has to
respond _before_ the first player has hit the ball, because of simple physics
-- if the 2nd player waits for the ball to be hit then he absolutely doesn't
have the time to do anything before the ball is on him. (That's probably also
true of baseball, although I don't know anything about that).

And so, how does he do it? Nobody really knows, but the current thinking is
that, through practice, the player receiving the ball reads the movements of
the hitter and infers where the ball will likely be (with great precision),
without actually using much information from the ball itself.

It's possible that, just as elite chess players can play without a board,
elite tennis players could play without balls.

~~~
nickpsecurity
Years of research on intuition show exactly what's happening. That part of
the brain builds models between senses and actions. It spots patterns. It
turns them into reflexes and instincts. It's not always accurate or rational,
but it operates continuously, instantly, and usually effectively.

In your example, as in my martial arts training, certain body patterns repeat
before an action happens. Intuition models the likely consequence (e.g.
trajectory). Another pattern says what response to take, given that
trajectory, to get the desirable result. A whole set of patterns nicknamed
muscle memory turns that thought into muscle impulses and controls them. The
strike results.

Intuition 101. The Marine Corps has been using it for decades in realistic
drills. Martial arts and other sports too. The more realistic the prep, the
more accurate and effective the mental model in the intuitive brain.

------
UK-AL
He has a very limited definition of a "computer". His description is of how
modern computers work, but that's not strictly what a computer could be.

When people compare a computer to a brain, they are not saying it's like a
modern PC. They are saying it's like a computer in a loose sense. A lot of
people don't get that; they think we're comparing it to the thing on their
desk. A computer can mean lots of things, with many different approaches.

For example, "computers do store exact copies of data" \- they don't have to?
Just our current implementation does.

~~~
criddell
Some people forget that there's more than one computer architecture. A
computer doesn't even have to be digital.

~~~
TheOtherHobbes
But that's expanding the definition to "Arbitrary processing architecture we
haven't even invented yet which really could simulate a human brain, if only
we knew what it was and how to build it - which we don't, but that's just an
engineering problem."

~~~
UK-AL
A computer is something that computes. Doesn't really matter how it does it.

~~~
chongli
And what does it mean for something to compute?

~~~
UK-AL
Being able to follow an algorithm, a set of steps (parallel or in series), to
reach a result.

The brain has input, goes through various connections, and you end up with a
result.

~~~
chongli
So is fire a computer? It goes through a series of steps to reach a result.

~~~
UK-AL
Is fire able to follow an algorithm? I mean, some creative person could
probably make a computer out of fire.

~~~
chongli
Well, what is an algorithm?

Anyway, before we stray too far from the point I was hoping to make, I'll
claim that when most people use the word "compute" they mean a particular task
performed at the behest of a human being. This sort of definition is
problematic if we're ever to attempt to build an artificial brain or to
encounter intelligent alien life radically different from our own.

It all falls back to the problem of agency and the philosophical concept of
the self.

~~~
criddell
I think that's a way too restrictive definition of computation.

I'd argue genetic code (messenger RNA) is a program and inputting genes and
outputting proteins is a form of computation.
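That reading is easy to make concrete: translation is a lookup-and-step procedure over codons. (Toy sketch; the table is a tiny excerpt of the real genetic code.)

```python
# Translating mRNA into amino acids: read one codon at a time, look it up,
# append the result, halt on a stop codon.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):   # step through the codons
        amino = CODON_TABLE[mrna[i:i+3]]
        if amino == "STOP":                # halting condition
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```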

~~~
chongli
So with that definition, how is fire not computation?

~~~
criddell
Where is the computation in fire?

BTW, I don't believe in free will. You mentioned earlier that there needs to
be some magic somewhere and I don't think there is magic. Occam's razor and
all that...

~~~
chongli
Fire is a process that converts chemical inputs into outputs, producing light
and heat. It may involve simpler reactions than protein synthesis, but why
does that matter?

~~~
criddell
> why does that matter?

Maybe that's a defining characteristic - complexity.

------
karmajunkie
I find it somewhat amusing that all of the top comments on a story about how
the human brain is not a computer seek to analogize the computer to the brain
in order to prove that the brain is in fact a computer, while actual experts
on the brain have gone into detail on exactly why that isn't the case.
Dunning-Kruger much?

~~~
panglott
I'm just reminded of Kuhn: "To the extent that two scientific schools disagree
about what is a problem and what a solution, they will inevitably talk through
each other when debating the relative merits of their respective paradigms.
...Both are looking at the world, and what they look at has not changed. But
in some areas they see different things, and they see them in different
relations one to the other. That is why a law that cannot even be demonstrated
to one group of scientists may occasionally seem intuitively obvious to
another."

------
brudgers

      Essentially, all models are wrong...but some are useful.
        --George Box
    

[https://en.wikipedia.org/wiki/George_E._P._Box](https://en.wikipedia.org/wiki/George_E._P._Box)

~~~
MacsHeadroom
Right. A brain is like a computer, so far as that is a useful model for
discovery. A brain is not like a conventional "computer", so far as accepting
this model precludes deeper understanding of brain functions that do not fit
the model very well.

------
mattlutze
This bit bothered me, on Epstein's Information Processing metaphor:

 _> The faulty logic of the IP metaphor is easy enough to state. It is based
on a faulty syllogism – one with two reasonable premises and a faulty
conclusion. Reasonable premise #1: all computers are capable of behaving
intelligently. Reasonable premise #2: all computers are information
processors. Faulty conclusion: all entities that are capable of behaving
intelligently are information processors._

Premise 1, in my reading, is not reasonable. It would be fallacious to assert
that all computers are capable of "behaving intelligently", because we don't
have a definition of intelligence, let alone a "universal" intelligence that
would be consistent between things, or any way to test an individual thing's
intelligence.

The syllogism is the core of the author's contrarian argument. If we can't
establish Premise #1, then the core argument of this essay does seem to lose
its legs.
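Note also that, whatever the status of Premise #1, the quoted form is invalid on its own: two shared-subset premises never entail the converse inclusion. A toy set model (elements invented) makes that visible:

```python
# C = computers, I = intelligent things, P = information processors.
# "C subset of I" and "C subset of P" do not entail "I subset of P".
computers = {"pc", "mainframe"}
intelligent = {"pc", "mainframe", "brain"}   # everything in C, plus more
processors = {"pc", "mainframe", "abacus"}

assert computers <= intelligent   # premise 1: all computers behave intelligently
assert computers <= processors    # premise 2: all computers are info processors
print(intelligent <= processors)  # False: the conclusion does not follow
```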

Really, a single viewpoint/facet/argument about the brain-as-a-computer
metaphor is undoubtedly insufficient.

More useful would be a set of arguments that compare or contrast how a
computer and a human brain work. We can use these to coalesce an understanding
of how close the metaphor is to reality.

How do the two receive information? How do the two store information, recall
information, delete information, update stored processes, etc.? All of these
perspectives begin to build a whole picture of the relationship, which seems
more useful than trying to boil it down to a single statement.

I think the author needs to revisit his basic assumptions and walk through the
argument again, because I don't think what's laid out in the essay actually
supports his conclusions, so confidently espoused in the introduction.

------
projectramo
I will quote the great philosopher of mind, Jerry Fodor:

1\. The computational theory of mind is the only remotely plausible theory of
mind.

2\. A remotely plausible theory is better than none at all.

We know we literally process symbols because I can literally read a book, and
I can literally write something down.

~~~
panglott
I'm looking up Anthony Chemero's book "Radical Embodied Cognitive Science",
and the first sentence of the preface is:

"Jerry Fodor is my favorite philosopher. I think that Jerry Fodor is wrong
about nearly everything.

...My goal is that this book is for non-representational, embodied, ecological
psychology what Fodor’s The Language of Thought (1975) was for rationalist,
computational psychology."

~~~
projectramo
Jerry Fodor himself was fond of saying, of his ideas with Ernie Lepore,
"Absolutely no one agrees with us, which leads me to believe we might be
right."

~~~
panglott
Chemero's book has a long section criticizing Fodor's reasoning in arguing
that connectionist networks are not a good model of human thinking.

It looks like there's little skepticism about the power of neural networks
here, even if some of the arguments are framed in terms of computationalism.
[https://en.wikipedia.org/wiki/Connectionism#Connectionism_vs...](https://en.wikipedia.org/wiki/Connectionism#Connectionism_vs._computationalism_debate)

~~~
projectramo
I was studying neural networks in grad school at Rutgers (my advisor was a
statistician from Bell Labs) and took a bunch of classes with Jerry, so I had
these debates on a daily basis.

Unless you summarize Chemero's critique, though, I can't really respond to it.

Fodor didn't claim that connectionist models couldn't encode symbolic
manipulation, just that the pertinent activity from the perspective of
"thought" is symbolic manipulation. So he (I believe) would have said, maybe
the connectionist can explain it or maybe he can't and who cares.

He did care that "concepts" were not statistical entities. They were "atomic"
and basically tokens for the language of thought. So, Jerry argued, the
concept of "Lion" could not be complex i.e. composed of cat, claws, teeth etc.

~~~
panglott
I posted a link and very brief summary elsewhere here.
[https://news.ycombinator.com/item?id=11730489](https://news.ycombinator.com/item?id=11730489)
He discusses Fodor on p. 26 as part of a critique of conceptual/a priori
(Hegelian) arguments.

~~~
projectramo
This caused me some indigestion.

I don't consider myself either for or against Fodor's argument, but I think
the summary does a great injustice to his arguments.

Fodor does claim that, for what he describes as language, systematicity and
compositionality are essential features. However, the "evidence" he cites
isn't from a study. It is primarily from facts about language.

To use one of his favorite examples, take these sentences:

    
    
        The cat ate the rat.
        The rat ate the cat.
    

If you understand sentence 1, you can understand sentence 2, and furthermore,
the words "cat", "ate" and "rat" mean the same thing.

He takes those facts to be uncontroversial, and he says that that is what he
means by systematicity and compositionality.

This "ability comes in clusters" bit is very confusing, and I am not sure what
he means by it.

Fodor doesn't care that connectionists don't have good models for symbolic
manipulation. He says that connectionist models are only good insofar as they
reduce to symbolic manipulation, because symbolic-manipulation models are the
only ones we have that demonstrate compositionality and systematicity.
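For what it's worth, the systematicity/compositionality claim is easy to state as a toy model (a caricature, not Fodor's formalism):

```python
# Meanings compose from reusable atomic parts: if the system can represent
# "the cat ate the rat", the same constituents yield "the rat ate the cat".
lexicon = {"cat": "CAT", "rat": "RAT", "ate": "EAT"}

def meaning(subject, verb, obj):
    """Compose a sentence meaning from word meanings (compositionality)."""
    return (lexicon[verb], lexicon[subject], lexicon[obj])

s1 = meaning("cat", "ate", "rat")
s2 = meaning("rat", "ate", "cat")
print(s1, s2)  # same atoms, different structure -> different meanings
```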

~~~
panglott
> He takes those facts to be uncontroversial, and he says that that is what he
> means by systematicity and compositionality.

Right, but Chemero's point is that that premise is not so empirically
grounded; it is an a priori assumption.

I am not familiar with this literature, but it's ultimately the same point
that he makes against Chomsky's poverty of the stimulus argument (the
literature on which I know much better): that it's not an empirically grounded
premise, and the evidence for such an a priori argument is weak.

~~~
projectramo
Okay, which premise? Can you be specific about the premise that you believe
(that Chemero believes) is not "empirically grounded"?

------
transpy
"The time has come to hit the DELETE key." So, after all, he is using a
computer metaphor? The author proceeds with an a priori agenda, rather than
looking at the facts and arriving at conclusions. I can't find a spot where
he tries to see whether his claims are falsifiable. His approach is
unscientific. To be fair, his essay does contain falsifiable ideas that could
be tested against current notions of intelligence as the capacity of entities
to consume and apply information.

------
mjgoins
A better title: Your Brain is Not an IBM-Compatible PC with 64k of RAM

~~~
user8341116
You're right. I have 128k of RAM.

------
delinka
My brain is not a silicon-based processing unit with a von Neumann
architecture. My brain, however, calculates, computes, stores, retrieves...

It's a computer made more advanced by evolution than anything we've built
through our understanding of science.

~~~
panglott
That's his whole point. It doesn't store or retrieve.

~~~
delinka
Indeed it does. If not, how then can you "recall" the names of your family
upon seeing their faces? You've stored a representation of each face, you've
stored their names, and you retrieve that information to maintain the
relationship.

~~~
karmajunkie
Pattern recognition. You see a face, and the visual stimulus refires the same
set of neurons, which then fire additional neurons, creating a chain that
eventually fires off the set connected with the sound of that person's name.
This is a pattern of neuron activity, and it's not the same as retrieval.
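One crude sketch of that refiring chain is a Hopfield-style attractor network (toy numbers): a partial cue re-fires the full stored activity pattern, with no address being looked up anywhere:

```python
# Two stored activity patterns (+1 = firing, -1 = silent), chosen orthogonal.
patterns = [[1, -1, 1, -1, 1, -1], [1, 1, -1, -1, 1, 1]]
n = len(patterns[0])

# Hebbian weights: neurons that fired together are wired together.
W = [[sum(p[i] * p[j] for p in patterns) if i != j else 0
      for j in range(n)] for i in range(n)]

def settle(state, steps=5):
    """Repeatedly let each neuron fire according to its weighted inputs."""
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

cue = [1, -1, 1, 0, 0, 0]  # a partial glimpse of the first pattern
print(settle(cue))         # the full first pattern re-emerges
```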

~~~
ef4
You're just describing a different way to physically distribute the bits. From
a mathematical standpoint it's still very clear that information-theoretic
bits are going in and coming back out again.

Or to put it another way, if you really think that isn't computing, you need
to argue that this new piece of hardware also isn't computing:

[https://cloudplatform.googleblog.com/2016/05/Google-
supercha...](https://cloudplatform.googleblog.com/2016/05/Google-supercharges-
machine-learning-tasks-with-custom-chip.html)

~~~
karmajunkie
Ah, no, it's not just twiddling bits. You can make analogies to that if it
makes it easier to describe a process, but all analogies break down. One may
create a mathematical model of the processes, but unless that model mimics
and predicts the biology so closely as to be indistinguishable from reality,
it's still just a model.

Your second assertion is simply wrong. Asserting that what the brain does
isn't the same as what a computer does simply doesn't imply that a computer
designed to mimic some aspects of biological neural nets is not a computer.

~~~
ef4
> twiddling bits

This kind of phrasing implies that you aren't hearing what I'm saying. You're
still picturing physical bits that can be twiddled. I'm not making an analogy
to any physical bits at all, I'm not even making an analogy.

I'm talking about bits in the information-theoretic sense. If I can send a
signal to a person and as a result that person can do better than random at
picking an intended symbol out of a set of symbols, then bits of information
were conveyed.

If the person can do the same thing at a later point in time, then bits of
information were also stored.

No analogies are required, this is literally the definition of "information"
and "bits" since 1948.
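The 1948 definition mentioned above can be made concrete with a small sketch (a toy symmetric channel; the function name and parameters are mine, not from the thread): if the receiver names the intended symbol out of N better than chance, the mutual information between signal and guess is positive, i.e. bits were conveyed.

```python
import math

def symmetric_channel_bits(n_symbols: int, p_correct: float) -> float:
    """Bits conveyed per use of a symmetric channel: the receiver picks
    the intended symbol (out of n_symbols) with probability p_correct,
    and errors are spread uniformly over the remaining symbols.
    I(X;Y) = H(Y) - H(Y|X); with a uniform source, H(Y) = log2(n)."""
    p_err = (1.0 - p_correct) / (n_symbols - 1)
    h_y_given_x = -p_correct * math.log2(p_correct)
    if p_err > 0:
        h_y_given_x -= (n_symbols - 1) * p_err * math.log2(p_err)
    return math.log2(n_symbols) - h_y_given_x

# Pure chance conveys nothing; better than chance conveys positive bits.
print(symmetric_channel_bits(4, 0.25))  # chance level -> 0.0 bits
print(symmetric_channel_bits(4, 0.70))  # better than chance -> positive
print(symmetric_channel_bits(4, 1.00))  # perfect recall -> 2.0 bits
```

Nothing here refers to physical bits or hardware; a person recalling a name from a face is, under this definition, exactly such a channel across time.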

~~~
karmajunkie
I'm hearing what you're saying, and if you want to take the broadest possible
definition of a computer, then sure, the human brain is a computer. So is a
protractor. Shadows on the ground could be used to "compute" the integration
of a curve. But it's pretty clear that this isn't the issue at hand in the OP.

------
duncancarroll
My comments from yesterday's thread in /r/Neuropsychology:

While I understand the frustration with the whole Kurzweil camp, this article
reads like it was written by an angry high schooler. It also throws the baby
out with the bathwater, in that there are clearly valuable analogies to be
drawn from computing, as others have mentioned in this thread. My two biggest
disagreements:

\- "Your brain does not process information": This statement is too broad. How
do I do mental math?

\- "Memory is not encoded in neurons [as evidenced by a crude drawing of a
dollar bill]": Where then does the crude drawing come from?

~~~
appleflaxen
> Where then does the crude drawing come from?

Thank you for making this point. I am completely unable to follow the author's
logic.

------
ThomPete
This is just another version of the Chinese Room "argument" and still based on
the same fundamental misunderstanding of what constitutes consciousness, and on
the erroneous insistence that there is some sort of magic about humans.

The brain and the computer are both pattern-recognizing feedback loops that
use input to simulate phenomena.

The computer is still evolving. The human brain much less so.

Humans evolved from animals; why should computers not be able to evolve from
humans?

------
facepalm
Fascinating how somebody can say nothing at all with so many words and give
the impression he has a clue. He doesn't seem to know how computers or brains
work. For example in the baseball catching story, there is no reason why a
computer/robot shouldn't be able to use the same heuristic as a human. ("to
catch the ball, the player simply needs to keep moving in a way that keeps the
ball in a constant visual relationship with respect to home plate and the
surrounding scenery" \- why should that be incompatible with information
processing?)

~~~
panglott
There he's talking more about paradigms that require or oppose mental
representations that simulate the motion of a ball. The point there is that
it's unnecessary to have an internal mental representation simulating the
motion of the ball in order to catch it.

~~~
facepalm
That's a funny way to claim brains are not like computers, then. After all,
the usual argument is that computers have no internal representation of
anything.

Why would anybody demand that there has to be an internal mental
representation of a ball to catch it?

------
asfarley
I just came here to make sure the article was receiving a suitable amount of
hatred. Everything is as it should be.

------
edtechdev
One argument of embodied cognition and enactivism is that all our concepts
have an embodied basis - even abstract things like math, computation,
language, philosophy, etc.

The article mentioned radical enactivism from Anthony Chemero. Another person
who's written a lot from this perspective is Dan Hutto:
[https://uow.academia.edu/DanielDHutto](https://uow.academia.edu/DanielDHutto)

Here for example is an article describing "Remembering without Stored
Contents"
[https://www.academia.edu/6799100/Remembering_without_Stored_...](https://www.academia.edu/6799100/Remembering_without_Stored_Contents_A_Philosophical_Reflection_on_Memory)

and he has a book: Radicalizing Enactivism: Basic Minds without Content. First
chapter:
[https://www.academia.edu/1163887/Radicalizing_Enactivism_Bas...](https://www.academia.edu/1163887/Radicalizing_Enactivism_Basic_Minds_without_Content)

But for a more general, less radical background on this topic, see also work
by Rafael Nunez and others:
[http://www.cogsci.ucsd.edu/~nunez/web/publications.html](http://www.cogsci.ucsd.edu/~nunez/web/publications.html)

The Embodied Mind: Cognitive Science and Human Experience
[http://www.amazon.com/Embodied-Mind-Cognitive-Science-
Experi...](http://www.amazon.com/Embodied-Mind-Cognitive-Science-
Experience/dp/0262720213)

George Lakoff & Rafael Nunez: Where Mathematics Come From: How The Embodied
Mind Brings Mathematics Into Being [http://www.amazon.com/Where-Mathematics-
Come-Embodied-Brings...](http://www.amazon.com/Where-Mathematics-Come-
Embodied-Brings/dp/0465037712/)

George Lakoff & Mark Johnson: Philosophy in the Flesh
[http://www.amazon.com/Philosophy-Flesh-Embodied-Challenge-
We...](http://www.amazon.com/Philosophy-Flesh-Embodied-Challenge-
Western/dp/0465056741)

Mark Johnson: The Body in the Mind: The Bodily Basis of Meaning, Imagination,
and Reason [http://www.amazon.com/Body-Mind-Bodily-Meaning-
Imagination/d...](http://www.amazon.com/Body-Mind-Bodily-Meaning-
Imagination/dp/0226403181)

And there are now around 30-40 books on this topic, at least.

------
deepnet
The author conflates brains and minds.

Of course brain wetware is not computer hardware, but minds run on wetware as
computation runs on hardware.

He claims we don't use algorithms to catch balls but then describes the
heuristic we do use algorithmically.

He attacks von Neumann's nascent speculations that the mind can be usefully
modelled digitally but ignores von Neumann's proposal for an alternative type
of computation, indeterminate & probabilistic - from the same book _The
Computer & the Brain_ (1958).

The inability of most people to draw from memory as accurately as from life is
evidence of compartmentalisation, not a lack of storage of mental images - as
drawing from memory is a skill that can be acquired.

Then he goes all in by proposing hard AI as fact: "Reasonable premise #1: all
computers are capable of behaving intelligently"

No doubt minds and brains are a deeper mystery than silicon and software but
the author fails to demonstrate why or propose how.

IMHO the roboticist Rodney Brooks was onto something when he proposed that
intelligence must be embodied - AI will come from interacting with the world.

The author cites Chemero's work _Radical Embodied Cognitive Science_, which is
very well argued and interesting.

------
zby
Electronic computers as we have them now are different in many aspects from
our brains - right. But they are also similar in many ways and surely we can
have some metaphor that would fit both.

What he forgets is that the information processor is also a metaphor with
respect to the machines we have on our desks. They don't really process
information - they just route some electrons around. But metaphors are useful.

~~~
panglott
Metaphors are useful, but taking them too literally is like overfitting your
model.

~~~
mannykannot
So true - and your reply applies to about 90% of the posts here.

------
pvillano
I think you're missing some key points

\- Good models are important. While the traditional computer model is good at
explaining behavior, it doesn't help us ask new questions. And no matter how
well what we currently know about the brain fits the traditional computer
model, if it can't be used to inspire new questions then we need to explore new
models. Feynman explains this better [[https://youtu.be/NM-
zWTU7X-k](https://youtu.be/NM-zWTU7X-k)]. Even though our finite, non-magic
brains could fit in a computer, a computer isn't the best metaphor.

\- Computers don't act like brains by themselves. In any FUZZY LOGIC compiled
and run on a computer, all your code is translated into functions and data.
LOSSY COMPRESSION is actually the application of experiments on perception,
and not an inherent property of computers. A jpg is designed to look the same
while using less space, but the information from it is not integrated the same
way a human integrates it. Our MEMORY is not like a computer's. It is
shockingly constructive, something that isn't expected under the computer
model.

\- New programming paradigms might be better models. My beef with the metaphor
is that it has very separate memory and computation. I don't think the brain
"reads and writes from memory". I think it creates and updates a mesh of
functions. Something like FUNCTIONAL PROGRAMMING could be used to model this.
What if instead of looking for a neural hard drive, we treat each neuron as a
function? What if generation of a dollar bill is actually OPTIMIZATION of our
recognition function? What if we are a PIPELINE of functions, each feeding
into the next from retinas to occipital lobe to thalamus to motor cortex to
muscles? I would rather see new ideas coming from us than holding on to an
outdated metaphor.
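The neurons-as-functions pipeline described above sketches naturally as function composition. This is a toy illustration with made-up stages (the stage names and thresholds are mine), not a biological model:

```python
from functools import reduce

def pipeline(*stages):
    """Compose stages left to right: the output of each stage feeds the
    next, like retina -> occipital lobe -> motor cortex -> muscles."""
    return lambda x: reduce(lambda acc, f: f(acc), stages, x)

# Hypothetical stages, purely illustrative.
retina        = lambda light: [v / 255 for v in light]            # normalize
edge_detector = lambda img: [abs(a - b) for a, b in zip(img, img[1:])]
motor         = lambda features: "reach" if max(features) > 0.5 else "rest"

see_and_act = pipeline(retina, edge_detector, motor)
print(see_and_act([0, 200, 10]))   # strong edge -> "reach"
print(see_and_act([10, 10, 10]))   # uniform field -> "rest"
```

On this picture there is no separate "neural hard drive" to read from: what the system "knows" is just the shape of the composed functions.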

------
danans
Regardless of the strength of the article's central assertion, it is
interesting to consider how closely coupled our consciousnesses are to the
stimuli we are actively receiving from our environment.

To continue with the IP metaphor that the author decries, it's like we only
have crude wireframes and physics models in our heads, and we project the
stimuli we receive to those models as best we can in realtime in order to
create our conscious experience. Perhaps the sparsity of the "models" our
brains contain is what allows us to adapt/reuse them to comprehend such a
broad array of phenomena.

The disorientation that people can experience during sensory deprivation or
even extreme social isolation is perhaps another indication of how closely the
coherence of our conscious experience is linked to our real-time environmental
stimuli.

This all makes one wonder whether we could make better intelligent agents by
somehow measuring the "coherence" of an agent's experience and turn that into
a positive reinforcement learning signal.

------
panglott
Cited in the article: Radical Embodied Cognitive Science, by Anthony Chemero
[http://uberty.org/wp-content/uploads/2015/03/Anthony-
Chemero...](http://uberty.org/wp-content/uploads/2015/03/Anthony-Chemero-
Radical-Embodied-Cognitive-Science-Bradford-Books-2009.pdf)

~~~
panglott
Some interesting ideas. He points out that cognitive science is young and can
be multi-paradigmatic, and is clear that this is non-mainstream cognitive
science. It is an attempt to explain cognition without appeals to mental
gymnastics on mental representations of the world: the world is its own model.

"The term radical embodied cognition is from Andy Clark, who defines it as
follows: Thesis of Radical Embodied Cognition[:] Structured, symbolic,
representational, and computational views of cognition are mistaken. Embodied
cognition is best studied by means of noncomputational and nonrepresentational
ideas and explanatory schemes, involving, e.g., the tools of Dynamical Systems
theory."

"...antirepresentationalism (which implies anticomputationalism) is the core
of radical embodied cognitive science."

There's a long discussion of Randall Beer's 2003 paper "The Dynamics of Active
Categorical Perception in an Evolved Model Agent" that uses a continuous time,
real-valued neural network (CTRNN).
[https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/...](https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/beer03.pdf)
"Using the model of the CTRNN alone, one can only tell how an instantaneous
input will affect a previously inactive network. But because the network is
recurrent, the effect of any instantaneous input to the network will be
largely determined by the network’s background activity when the input
arrives, and that background activity will be determined by a series of prior
inputs. This model of the CTRNN, in other words, is informative only if one
knows what flow of prior inputs to the neural network typically precedes (and
so determines the typical background activity for) a given input. The impact
of the visual stimulus is determined by prior stimuli and the behavioral
response to those prior stimuli. The model of the CTRNN is useful, that is,
only when combined with the models of the whole coupled system and the
agent–environment dynamics. These three dynamical systems compose a single
tripartite model. ...The models also show that the agent’s "knowledge" does
not reside in its evolved nervous system. The ability to categorize the object
as a circle or a diamond requires temporally extended movement on the part of
the agent, and that movement is driven by the nature and location of the
object as well as the nervous system. To do justice to the knowledge, one must
describe the agent’s brain, body, and environment. Notice that none of these
dynamical models refers to representations in the CTRNN in explaining the
agent’s behavior. The explanation is of what the agent does (and might do),
not of how it represents the world. This variety of explanation—of the agent
acting in the environment and not of the agent as representer—is a common
feature of dynamical modeling..."
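For readers unfamiliar with the model behind Beer's paper: a CTRNN neuron's state obeys tau*dy/dt = -y + W*sigma(y + theta) + I, so the effect of an instantaneous input depends on the background state left behind by prior inputs, exactly as the quote says. A minimal Euler-integration sketch (weights and constants are arbitrary, chosen only to show the recurrence):

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def ctrnn_step(y, W, theta, I, tau, dt=0.01):
    """One Euler step of tau_i * dy_i/dt = -y_i + sum_j W_ij*sigma(y_j+theta_j) + I_i."""
    act = [sigma(yj + tj) for yj, tj in zip(y, theta)]
    return [yi + dt / tau[i] * (-yi + sum(W[i][j] * act[j]
            for j in range(len(y))) + I[i]) for i, yi in enumerate(y)]

# Two mutually connected neurons: the same instantaneous input drives the
# network to different states depending on its prior background activity.
W, theta, tau = [[0.0, 5.0], [5.0, 0.0]], [0.0, 0.0], [1.0, 1.0]
y_rested, y_primed = [0.0, 0.0], [4.0, 4.0]
for _ in range(100):
    y_rested = ctrnn_step(y_rested, W, theta, [1.0, 0.0], tau)
    y_primed = ctrnn_step(y_primed, W, theta, [1.0, 0.0], tau)
print(y_rested, y_primed)  # identical input stream, different trajectories
```

That history-dependence is why, as the excerpt argues, the CTRNN model is only informative when coupled with models of the agent-environment dynamics.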

------
zekevermillion
The brain is a computer. A rock, or a cloud, or a lake could also be thought
of as computers via computational equivalence. You could say that a biological
brain is a special type of computer. But such mystical language does not help
us understand the brain, artificial brains, or computing in general.

------
olleicua
I feel like the main issue with this article was a failure to adequately
define "computer" and "information processing". There were some good points.
In particular the fact that a given experience will likely cause completely
different changes in the brains of any two individuals is poignant in the face
of Kurzweil's claims. If the claim of the article was "Kurzweil is way too
optimistic" then it would have been much better. Instead it works from an
entirely too specific and poorly defined understanding of what a computer is
to argue a point that is fairly difficult to defend.

------
pjdorrell
This article is like someone went to the "List of Fallacies" page in Wikipedia
and turned it into an essay about how the brain is not an information
processing system.

------
arisAlexis
The brain does not store an actual image of a dollar bill because it stores
only the important information - a basic figure, shape, etc. - and that is why
it is so efficient at preserving space. I can't believe this argument is coming
from the former editor of Psychology Today. The rest of the article is filled
with bad arguments like "this is obviously silly".

------
philippnagel
As the link is not resolving for me:

[https://webcache.googleusercontent.com/search?q=cache:1UZsMv...](https://webcache.googleusercontent.com/search?q=cache:1UZsMvYNGWgJ:https://aeon.co/essays/your-
brain-does-not-process-information-and-it-is-not-a-
computer+&cd=1&hl=en&ct=clnk&gl=de)

------
panglott
"The information processing (IP) metaphor of human intelligence now dominates
human thinking, both on the street and in the sciences. ...But the IP metaphor
is, after all, just another metaphor – a story we tell to make sense of
something we don’t actually understand. And like all the metaphors that
preceded it, it will certainly be cast aside at some point – either replaced
by another metaphor or, in the end, replaced by actual knowledge. ...the IP
metaphor is ‘sticky’. It encumbers our thinking with language and ideas that
are so powerful we have trouble thinking around them. ...The idea, advanced by
several scientists, that specific memories are somehow stored in individual
neurons is preposterous...no one really has the slightest idea how the brain
changes after we have learned to sing a song or recite a poem. But neither the
song nor the poem has been ‘stored’ in it. The brain has simply changed in an
orderly way that now allows us to sing the song or recite the poem under
certain conditions. ...The mainstream view is that we, like computers, make
sense of the world by performing computations on mental representations of it,
but Chemero and others describe another way of understanding intelligent
behaviour – as a direct interaction between organisms and their world."

RTWT.

------
jcbeard
Ok. For one, we really still don't know a lot about the brain. Secondly,
well...as far as I can tell, the brain is like a computer that adds more and
more traces as it goes. The logic is the circuit, no real analogue to
software. The hardware is the algorithm, and we can add new hardware. There is
storage, but it's fuzzy...with an MTBF that's not well understood. The actions
that we take are based on potentials, just like in a circuit with sense amps
that aren't perfectly tuned, nor is the switch amp binary....there's a lot of
muxing going on (again, that we don't understand). But to say that the brain
isn't like a computer is extremely naive. Cells make up the brain, each cell's
logic can be worked out using well...logic. Putting those cells together into
a complex system produces behavior that is obviously complex, and hard to
understand. Ever tried to understand the behavior of a multi-threaded, multi-
node, multi-socket compute system down to the microsecond? It's complex right?
Yeah, we barely can grasp that, give it a few decades and we'll probably
figure out a bit more about how the brain works...and how to quantify it.

~~~
panglott
BRAINS ARE COMPUTERS is a useful conceptual metaphor for many purposes. But
metaphors are imperfect. It's not literally true that LIFE IS A JOURNEY or
SOCIAL ORGANIZATIONS ARE PLANTS.

------
dc2
In spite of the length of the article, he hardly proves anything in regard to
his point or how this can be.

------
larksimian
As others have commented, the author doesn't seem to understand that computers
as Information Processors is also a metaphor. Computers just shuffle electric
signals around according to the properties of their circuits. Software is just
an abstract way of referring to electricity flowing in a certain way inside
the computer.

My knowledge of how computers work is fairly shallow and my knowledge of
neuroscience is basically none. With that caveat: this answer
[https://www.quora.com/Is-the-human-brain-analog-or-
digital](https://www.quora.com/Is-the-human-brain-analog-or-digital) best fits
my biases/intuitions. TLDR: The brain has a digital aspect -- neurons fire and
send a signal or they do not -- and an analog aspect -- whether or not a
neuron fires depends on the chemical(?) conditions in which it's embedded and
inside the cell.
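That hybrid picture can be caricatured with a leaky integrate-and-fire neuron, a standard textbook model (not taken from the linked Quora answer): the membrane potential is analog and history-dependent, while the spike is all-or-nothing.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: analog accumulation, digital output.
    Each step the potential leaks, adds the continuous-valued input, and
    emits a binary spike when it crosses threshold, then resets."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i            # analog: graded, depends on history
        if v >= threshold:
            spikes.append(1)        # digital: the spike happens or it doesn't
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.9]))  # -> [0, 0, 0, 1, 0]
```

Note that whether the fourth input produces a spike depends entirely on the analog residue of the first three, which is the "analog aspect" the answer describes.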

This makes sense to me. That being said, I'm not sure whether I actually
disagree with the author. If anyone is trying to find the brain's equivalent
of a CPU, that's a fool's errand. It also seems likely that digitizing a brain
is not possible, certainly not feasible. Digitizing analog systems is
always(?) lossy -- the lossiness just doesn't matter with things like music
due to the limitations in human hearing.

This doesn't mean we can't create artificial brains, it just means that they
can't be digital machines.

My personal, bad, metaphor for memories is that they work somewhat like linked
lists. It's easy enough to recite a poem start to finish. Harder to recite it
if you start in the middle. Really hard to recite it backwards. Is this true?
Who knows. Does it tell us anything about the underlying mechanisms? Not
really.
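The "bad metaphor" above is easy to make literal (a toy, with the poem and structure invented here): in a singly linked chain each element only cues its successor, so forward recitation is a cheap pointer chase while backward recitation has no direct support.

```python
# Each position only "knows" the next one -- a singly linked chain of cues.
poem = ["Tyger", "Tyger", "burning", "bright"]
next_word = {i: i + 1 for i in range(len(poem) - 1)}

def recite(start):
    """Forward recitation: follow the links until the chain ends."""
    i, out = start, []
    while i is not None:
        out.append(poem[i])
        i = next_word.get(i)
    return " ".join(out)

print(recite(0))  # start to finish: easy
print(recite(2))  # from the middle: works, but you had to be handed the cue
# Reciting backwards has no supporting links at all: no node stores a
# pointer to its predecessor, so each step would need a fresh forward scan.
```

As the commenter says, this says nothing about the underlying mechanism; it only captures the asymmetry between forward and backward recall.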

------
scalio
Judging from the comments, there's a massive misunderstanding going on between
the readers, the author, and everyone's understanding of ourselves.

There are certainly odd things in this article. What I think the author wants
to get at:

We don't store specific information like "on a dollar bill, there's a line
going from x of the left side to y of the bottom side with a curvature of z"
in a specific container labeled "info about dollar bills", to be read out
every time you interact with one.

Instead, each time we pay using such a bill, the chain of neurons firing from
the moment you open your purse to the moment you get your receipt, as a whole,
is responsible for your internal representation of a dollar bill: (1) first
glimpse of sheets of paper in purse (2) fumble individual sheets to examine more
closely (3) recognize key elements like green colour, rectangular, famous bust
in the center (4) in case you didn't, now you know you've got cash in there
(5) remember the total, do some arithmetic to figure out which bills to hand
over (6) extract bills from purse (7) extend hand to hand cash over to cashier
(8) wait for receipt/pack things (9) get receipt (10) purchase complete.

This entire chain (which is obviously incomplete for the sake of
demonstration), with all the completely unrelated stuff about rummaging in a
purse or human interaction, comes together to form what you think of as a
dollar bill. This is how we store information, by relating it to stuff we
already know.

Which is why it makes sense that babies come into this world with only the
bare minimum. We don't pop out ready made, that's the whole point of childhood
and adolescence. We build ourselves, unconsciously incorporating our
surroundings into our personality, by constantly relating to what we know,
comparing new things to old things. This implies that a faulty impression
early on has disastrous consequences in whatever is built afterwards, meaning
most of it.

(edit) just came up with a nice formulation: the brain doesn't store raw data
as in pixels or decimal values, it stores characteristics and rebuilds the
thing you're thinking about as well as your medium of expression allows it to
(which is why you can always think about something, but when asked to draw or
describe it, it just seems impossible). This explanation is far from perfect,
and I don't understand how our notion of numbers, for example, comes to be, but
seeing as maths is really just a load of characteristics giving interesting
results when combined, I see no problem.

------
Vaenae
The map is not the territory. But that doesn't mean that maps can't be useful.

------
jackson1372
I'm a PhD student who studies this stuff. This article is, quite simply,
poorly argued.

Most importantly, there's nothing in here that directly argues against the
information processing metaphor. The author gives a bad argument that he
thinks lies behind the metaphor and shows why that argument is bad. But a) I
don't recognize this argument as what motivates the metaphor and b) to show
that one argument for a conclusion fails is not to show that the conclusion is
false.

The author often points to differences between us and actual computers. No one
who employs the information processing metaphor is claiming that we are
_identical_ to human-made computers. Rather, the claim is that there's some
abstract sense of 'information processing' under which both human brains and
computers are implementations. In order for both of those things to be
implementations, they need not do so with the exact same kinds of mechanisms,
nor must they have the exact same kinds of abilities. (This is why I find the
discussion of the dollar drawing case so strange - no one is claiming that we
store visual representations of objects in the way that an actual human-made
computer does.)

When the author writes:

"To catch the ball, the player simply needs to keep moving in a way that keeps
the ball in a constant visual relationship with respect to home plate and the
surrounding scenery (technically, in a ‘linear optical trajectory’). This
might sound complicated, but it is actually incredibly simple, and completely
free of computations, representations and algorithms."

What he describes is not free of algorithms. Insofar as it's a procedure that
takes in certain inputs and follows a series of steps in order to achieve some
goal, it just _is_ an algorithm, albeit a somewhat simple one.
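Indeed, one classic formalization of the "linear optical trajectory" heuristic (often attributed to Chapman) writes down directly as a feedback rule: move so that the tangent of the gaze angle keeps rising at the constant rate it started with. A 1-D sketch with made-up kinematics and no speed limit (so an idealization, not a physical fielder):

```python
def chapman_track(frames, x0):
    """Gaze-heuristic sketch: after the first glimpse, stand wherever makes
    tan(gaze angle) = c*t, where c is the rate observed at the first frame.
    frames = [(t, ball_x, ball_y), ...]; the fielder starts at x0, beyond
    the ball. Returns the fielder's position at each frame."""
    t0, bx0, by0 = frames[0]
    c = by0 / ((x0 - bx0) * t0)        # initial rate of tan(angle) per second
    xs = [x0]
    for t, bx, by in frames[1:]:
        xs.append(bx + by / (c * t))   # position that keeps tan(angle) = c*t
    return xs

# Ball on a parabola launched from x=0 that lands at x=20 after 1 second.
frames = [(t / 10, 20 * t / 10, 40 * (t / 10) * (1 - t / 10))
          for t in range(1, 11)]
path = chapman_track(frames, x0=30.0)
print(path[-1])  # -> 20.0: the fielder arrives exactly where the ball lands
```

Simple as it is, this is unambiguously a procedure mapping inputs (optical angle over time) to outputs (movement), i.e. an algorithm.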

More importantly, the author seems to misunderstand what is meant when
researchers say that the brain 'runs' an algorithm. Of course _you_ don't
consciously crunch numbers when your visual system is chugging away, trying to
analyze and sort through the...data...it's getting from the retina. But the
lack of _conscious_ computation is not evidence for the complete lack of
computation.

There are all sorts of legitimate criticisms of cognitive science - and the
author discusses some of these at the end of the piece without really
understanding/explaining what's going on behind them. But the author says
nothing convincing that it's the failure of the information processing
metaphor that explains the limitations/failures of cognitive science.

~~~
panglott
In fact several people in this thread have explicitly compared human visual
memory to lossy compression of images in a computer system. That's his main
point, about the metaphor leading people astray.

~~~
jackson1372
Understood in the right way, there's nothing wrong with that comparison.

Certainly, people misuse/misunderstand the information-processing
metaphor/claim. (Though the people who do this tend not to be cognitive
scientists.) But that's not to say that the metaphor is complete garbage.

~~~
mannykannot
The vast majority of the posts here are having trouble looking beyond it. That
suggests to me that the metaphor is becoming more problematic than useful, at
least among people who understand digital technology but who are not cognitive
scientists.

------
pttrsmrt
Provocatively naive - does anyone nowadays really think the brain works like a
computer? A much more interesting focus is the non-representationalist
understandings of the world seen in the discussion.

~~~
UK-AL
"Provocatively naive - does anyone nowadays really think the brain works like
a computer?" It's the most widely accepted theory.

It's just not a computer in the way you think a computer works.

------
nickpsecurity
Oh, this is a treat. Author claims to demolish "Brain is a Computer" as
historical ignorance by comparing digital computers and brain. Instead, author
shows own ignorance of history of computing by ignoring the relevant branch:
analog computers. Most ignore them although they're _critical_ here. So,
here's a summary of how analog computers work:

[http://oro.open.ac.uk/5795/1/bletchley_paper.pdf](http://oro.open.ac.uk/5795/1/bletchley_paper.pdf)

These types of computers implement mathematical functions that process signals
from the real-world in their native form in real-time. Each component
implements primitive functions that act on electrical input. They're usually
arranged in connected components that flow from one thing to another or have
feedback loops. They have no memory but can emulate it by generating on the
fly. Doesn't that sound awfully familiar? ;)
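By way of illustration, the defining trick of an analog computer - wiring an integrator into a feedback loop so that the circuit itself embodies a differential equation - can be mimicked numerically. This is a sketch only; a real machine does it continuously with op-amps, not in discrete steps:

```python
def analog_decay(x0=1.0, k=1.0, dt=0.001, t_end=1.0):
    """Feedback loop around a single integrator: feed -k*x back into the
    integrator's input and the 'circuit' solves dx/dt = -k*x directly,
    with no stored program and no memory beyond the signal itself."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (-k * x)   # the integrator accumulating its fed-back output
        t += dt
    return x

print(analog_decay())  # ~0.368, i.e. e^-1, the exact solution at t = 1
```

The "memory" here is nothing but the instantaneous value of the signal, which matches the generate-on-the-fly description above.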

I've been saying for _years_ that the brain is not digital and is a computer.
The past year or two of learning hardware just solidifies it more. Just look at
its properties to instantly see analog effects:

[http://theness.com/neurologicablog/index.php/is-the-brain-
an...](http://theness.com/neurologicablog/index.php/is-the-brain-analog-or-
digital/)

There are models for general-purpose, analog computers with prototypes built,
analog models of brain functions, and analog implementations of neural
networks. All show that the brain might actually be a general-purpose, analog
(or mixed) computer. Moreover, it seems to be a self-modifying, analog machine
with some emergent properties starting at childhood in its early phase. The
complexity of this thing and re-creating any one brain's function would be
mindboggling as author correctly noted.

So, the brain is not a digital computer. It's an analog or mixed-signal
computer with massive redundancy and ability to reconfigure itself. The
descriptions of inconsistencies with digital match up nicely to model
approximations in an analog style. Those issues even contributed to the switch
to digital for better accuracy/precision. Now, they're going to have to switch
back if they want to match the greatest computer ever made. :)

I'll leave you with examples of analog and neural.

[http://www.artificialbrains.com/brainscales](http://www.artificialbrains.com/brainscales)

[https://pdfs.semanticscholar.org/6e84/1782fec1f1f46629ad965b...](https://pdfs.semanticscholar.org/6e84/1782fec1f1f46629ad965b94a8891215a3bb.pdf)

Note: The above, my favorite as geometrically closer to brains, is a 60
million synapse system that took _294,912 cores_ to simulate. That's what a
handful of analog chips are doing in real-time. Behold the power! :)

[http://research.cs.queensu.ca/home/akl/cisc879/papers/PAPERS...](http://research.cs.queensu.ca/home/akl/cisc879/papers/PAPERS_FROM_MINDS_AND_MACHINES/VOLUME_13_NO_1/J7L1675237505M16.pdf)

[http://binds.cs.umass.edu/anna_cp.html](http://binds.cs.umass.edu/anna_cp.html)

Note: Siegelmann writes on non-Turing models for computation focusing on
analog and neural. Her lab is all over the theoretical aspects of this stuff.

[http://moon.cc.huji.ac.il/oron-
shagrir/papers/Brains_as_Anal...](http://moon.cc.huji.ac.il/oron-
shagrir/papers/Brains_as_Analog-Model_Computers.pdf)

Note: Shows how many of these things are mathematical relationships. Other
references exploit that given it's what analog implements.

[http://www.eetimes.com/document.asp?doc_id=1329591](http://www.eetimes.com/document.asp?doc_id=1329591)

Note: A few were interested in simulating actual brain structures with
memristors. Above is the latest take on that in Russia.

So, there you all go. It's a computer. It's an analog computer. Also, ends the
parallel debates about whether you can have a general-purpose, analog computer
or whether it can top digital. A few analog computers working as a team...
invented... digital computers. QED. :)

------
slantaclaus
What is a computer?

------
beardog
just because our brain is not at all like our current 'computers',
architecturally speaking, does not mean it is not a computer. Our brains
calculate, store, and retrieve information, just not to the near-perfect
extent that our 'normal' computers do.

------
avs733
among all the other well-articulated complaints in the comments here, I have
to ask a question...

is the result of this article the conclusion that computers are just boring,
uncreative, overly logical brains? Who woulda thunk?

------
ayylmao9077
This guy really doesn't seem to know what he's talking about.

------
Vanit
Humans are Turing complete. What other definition do you need?

------
naringas
But it can certainly compute.

------
stcredzero
tl;dr -- This article is a barefaced example of rank ignorance. The author and
editors have no idea what "information" and "algorithm" mean. Instead, they
seem to be dealing with those concepts as ignorant rubes.

 _A baby’s vision is blurry, but it pays special attention to faces_

The latter phrase reflects a well established and important developmental
fact. I'm very curious about the first phrase. Is this also an established
fact in the science of human development? On the face of it, it seems like an
extraordinary piece of research if that's true. One cannot, of course, simply
_ask_ a baby if their vision is blurry. Their visual acuity needs to be
inferred, and some cleverness would be required to do this reliably and
accurately.

Anyone know which study this refers to? (Or is this an example of why this
article is pretty darn fluffy?)

EDIT: Arrgh! This article is ignorant fluff!

 _But here is what we are not born with: information, data, rules, software,
knowledge, lexicons, representations, algorithms, programs, models, memories,
images, processors, subroutines, encoders, decoders, symbols, or buffers –
design elements that allow digital computers to behave somewhat intelligently.
Not only are we not born with such things, we also don’t develop them – ever._

The brain can contain information. That's simply an evident fact. Does the
author understand what information means?

The human nervous system also embodies algorithms; that's a fact established by
neurobiology. The visual system is full of them. I think the author doesn't
understand what these words mean. All of the other words seem to be thrown in
to evoke images of machine parts that don't look like people parts. It's
exactly a cargo-cult misunderstanding: deferring to surface resemblances
instead of the underlying principles.

Also, anyone who's delved into Rubik's Cube solving knows that we humans can
develop and embody algorithms. (The cubing scene plays fast and loose with the
term, but cubers do use algorithms in the mathematical sense.) This author
loses a whole lot of academic credibility for writing this, and the editors of
this website do as well for publishing it. This is rank ignorance on the level
of publishing a perpetual-motion-machine article. It's 2016. If you are
publishing a factual article of interest to intellectuals and you don't
understand what information or an algorithm is, you have no more business
writing, editing, and publishing such an article than if you had no idea what
the periodic table means.

 _Misleading headlines notwithstanding, no one really has the slightest idea
how the brain changes after we have learned to sing a song or recite a poem.
But neither the song nor the poem has been ‘stored’ in it. The brain has
simply changed in an orderly way that now allows us to sing the song or recite
the poem under certain conditions. When called on to perform, neither the song
nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more
than my finger movements are ‘retrieved’ when I tap my finger on my desk. We
simply sing or recite – no retrieval necessary._

As a traditional musician who can play several hundred melodies from memory, I
can say this is complete BS. He's simply playing stupid games with terminology.
What I know from my experience and that of many fellow musicians
is that we _do have to retrieve_ melodies. In fact, sometimes we have to take
a few minutes to make sure we have all parts of the melody retrieved. You can
go out to pubs in SF and see traditional musicians sit there and do this.
Sometimes the right cue needs to occur for the complete retrieval to happen.
Some musicians have certain melodies they can only remember after they
remember the name.

------
user8341116
My brain computes things. So by definition, it is a computer. QED

------
thinkMOAR
Hmm probably a brain that wrote this, to confuse us.

------
jbandela1
Google cache for those who can't connect to the paper
[https://webcache.googleusercontent.com/search?q=cache:https:...](https://webcache.googleusercontent.com/search?q=cache:https://aeon.co/essays/your-
brain-does-not-process-information-and-it-is-not-a-
computer+&cd=1&hl=en&ct=clnk&gl=us)

TLDR Response:

After reading the essay, take a look at this post by the Google deep learning
team, especially the images.

[http://googleresearch.blogspot.co.uk/2015/06/inceptionism-
go...](http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-
into-neural.html)

I think most if not all of the author's arguments that the brain is not a
computer can also be applied to Google's deep learning (which obviously runs on
computers).

Long response:

I think the author is very naive about how broad the notion of a computer is
and what a computer can do. Deep learning obviously runs on a computer, but, as
the author says of the brain, any single image or fact is probably not encoded
in a single place.

In addition, we have probabilistic data structures, for example the Bloom
filter. Would a computer running a Bloom filter to recognize data it had seen
before not be a computer because it did not store a complete representation of
any one item?
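For anyone unfamiliar with the data structure, here is a minimal Bloom filter sketch in Python (my toy illustration, not from the article): it can answer "have I seen this before?" while storing nothing but a bit array, i.e. no complete representation of any item.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: remembers membership without storing any item."""

    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size  # the only state: a bit array

    def _positions(self, item):
        # Derive `hashes` bit positions from the item via salted SHA-256.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("a melody heard before")
print("a melody heard before" in bf)  # True
```

Like recognition memory, it is far better at verifying "seen before" than at reconstructing what was seen, since the original is not stored at all.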

What we are seeing now is that as computers go beyond just storing and
retrieving data, to actually doing things like speech/image/pattern
recognition, their representations are becoming more and more like the brain's.

In addition, the author ignores that the brain did indeed develop a way to
encode information exactly. If the author would like more information about
this, I suggest they look up "drawing" and "writing" on Wikipedia. With
drawing, and especially with writing, the brain developed a way to encode
information exactly, so that it could later be retrieved. Moreover, it could be
retrieved by other brains (if they know the language).

Also, the author proposes straw-man arguments for the other side. I am somewhat
familiar with both computers and brains (I trained in neurosurgery but now work
as a programmer), and I have never met a professional who claimed that memories
are stored in individual neurons! I was always taught that memories are encoded
in the connections between neurons - which is very similar to how neural
networks encode information (as the strength of connections between elements of
the network).
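The "strength of connections" idea can be sketched with a classic Hebbian/Hopfield-style toy (my illustration, not from the article): the memory lives only in a weight matrix, distributed across every connection, yet can be recalled from a corrupted cue.

```python
import numpy as np

# Hebbian outer-product rule: the stored pattern exists only as
# connection strengths in W, not in any single "neuron".
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=16)   # a toy "memory" of +/-1 activations
W = np.outer(pattern, pattern)           # strengthen co-active pairs
np.fill_diagonal(W, 0)                   # no self-connections

# Recall from a degraded cue: corrupt part of the pattern, then let
# each unit take the sign of its weighted input (one update step).
cue = pattern.copy()
cue[:4] *= -1                            # flip 4 of 16 elements
recalled = np.sign(W @ cue)

print(np.array_equal(recalled, pattern))  # True: the memory is reconstructed
```

Note that the recall works despite the damaged cue, and deleting any one row of `W` would degrade the memory gracefully rather than erase it, which is exactly the distributed behavior the straw man gets wrong.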

It is also interesting that he quotes Kandel. When I actually read Principles
of Neural Science (by Kandel and Schwartz - [http://www.amazon.com/Principles-
Neural-Science-Eric-Kandel/...](http://www.amazon.com/Principles-Neural-
Science-Eric-Kandel/dp/0838577016)), I got the distinct impression that Kandel
actually viewed the brain as an information processor, with lower level
information being processed into higher level representations (see for example
the chapter on the visual system).

So overall, the author takes a very limited view of both computers and brains,
and concludes (I think falsely) that the brain is not a computer.

~~~
mannykannot
I do not think that is his conclusion, though he is certainly at fault for
giving that impression. I think he is actually saying that it is misleading to
compare a brain too closely to a digital computer.

It is trivially true that the brain processes information - that is not much of
a conclusion; it is a starting point, and it certainly does not mean that the
brain is like a digital computer.

------
aluhut
I've been going through my pocket stack today and came along this article:

[http://www.nytimes.com/2015/06/28/opinion/sunday/face-it-
you...](http://www.nytimes.com/2015/06/28/opinion/sunday/face-it-your-brain-
is-a-computer.html)

> "Face It, Your Brain Is a Computer"

I think it fits as a counterpoint.

~~~
panglott
Interestingly, the author of that piece is the editor of "The Future of the
Brain", a selection of which is cited in the original article.

------
SubiculumCode
This article is poorly thought-out clickbait. Few cognitive scientists think
about cognition in the way the article suggests they do, and the suggested
replacement is next to useless.

------
tgflynn
I think that the brain does something beyond or in addition to information
processing, because I don't see how you can get qualia or conscious experiences
out of information processing alone. However, to claim that information
processing isn't at least an aspect of what the brain does is absurd. If you
agree that sorting a list constitutes information processing, then since a
person equipped with pencil and paper (or without, if the example is short
enough) can sort a list, it's clear that the person's brain has processed
information.
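The pencil-and-paper procedure that argument relies on is, step for step, a sorting algorithm. A sketch of it in Python (my illustration of the comment's point, not from the article):

```python
def pencil_and_paper_sort(items):
    """Selection sort, roughly what a person does by hand:
    scan for the smallest remaining item, write it down, cross it off."""
    remaining = list(items)
    result = []
    while remaining:
        smallest = min(remaining)    # scan the list for the smallest item
        remaining.remove(smallest)   # cross it off the original list
        result.append(smallest)      # write it onto the sorted list
    return result

print(pencil_and_paper_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```

Whether executed by silicon or by a person with a pencil, the same inputs are transformed into the same outputs by the same rule, which is all "information processing" means here.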

I find it ironic that one of the things the author claims humans don't have is
memories. Memory is a concept that arose to describe an aspect of human
experience thousands of years before computers existed. Using the term
"memory" to describe information storage in computers is indeed an analogy but
it's an analogy in the opposite direction from what the author claims.

