
Your brain does not process information and it is not a computer
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
======
deathanatos
> _What is the problem? Don’t we have a ‘representation’ of the dollar bill
> ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it
> and use it to make our drawing?_

We can have a representation of a dollar bill, and you can see that the
student retrieved that representation: _he drew a picture_.

Sure, the representation is not exact, and some details have been omitted, but
this is true of all representations, even computerized ones. Uber does not
store an exact representation of every car; they cannot tell me that there is
a dent on the back side of the car picking me up. Yet, my phone relays that a
"silver Honda with plates XXXX" is picking me up — a representation of the
car, and _one good enough to get the job done_.

> _no image of the dollar bill has in any sense been ‘stored’ in Jinny’s
> brain._

And how not? The student was, again, able to draw a dollar bill. It wasn't
perfect, but neither is a JPEG photo.

> _McBeath and his colleagues gave a simpler account: to catch the ball, the
> player simply needs to keep moving in a way that keeps the ball in a
> constant visual relationship with respect to home plate and the surrounding
> scenery (technically, in a ‘linear optical trajectory’)._

Perhaps that is approximately how it works, but that doesn't mean it is the
_only_ means by which the brain could accomplish the task. I can throw and
catch objects without looking at them — without a "constant visual
relationship with respect to […] the surrounding scenery" — so how do I do
this?
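
Incidentally, the "linear optical trajectory" claim itself is easy to
sanity-check numerically: if the fielder runs at constant speed so as to
arrive exactly where and when the ball lands, the tangent of the ball's
elevation angle grows linearly in time. A toy sketch (all the numbers are
made up, and it ignores air resistance):

```python
import math

g = 9.8                # gravity, m/s^2
vx, vy = 15.0, 20.0    # ball's launch velocity components (made-up numbers)
x0 = 40.0              # fielder's initial distance ahead of the ball, m

T = 2 * vy / g         # time of flight of the ball
R = vx * T             # where the ball lands
vf = (R - x0) / T      # constant fielder speed that arrives exactly at landing

def tan_elevation(t):
    """Tangent of the ball's elevation angle as seen by the moving fielder."""
    h = vy * t - 0.5 * g * t * t        # ball height at time t
    gap = (x0 + vf * t) - vx * t        # horizontal distance fielder -> ball
    return h / gap

# Sample tan(alpha) at equal time steps: the increments come out constant,
# i.e. the ball stays on a "linear optical trajectory" for this fielder.
samples = [tan_elevation(T * k / 10) for k in range(1, 10)]
deltas = [b - a for a, b in zip(samples, samples[1:])]
print(all(abs(d - deltas[0]) < 1e-9 for d in deltas))  # True: growth is linear
```

Of course, showing that a constant-speed interception *produces* a linear
optical trajectory doesn't settle whether the brain actually uses that cue —
which is exactly the point about it not being the only possible mechanism.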

------
blacksqr
...states the author without evidence.

The author seems unfamiliar with the concept of levels of abstraction, and
concludes that information presented in a way other than its rawest, most un-
abstracted form isn't really information.

~~~
gt_
Yes, and what a shame. Their hindrance is at least exemplified quite clearly.
There could be hope.

------
placebo
This reminds me of a great snippet from one of Raymond Smullyan's books ("This
Book Needs No Title") that I think best captures the implied and unfounded
fear behind these sorts of articles. Here's the quote in full:

 _“Recently I was with a group of mathematicians and philosophers. One
philosopher asked me whether I believed man was a machine. I replied, ‘Do you
really think it makes any difference?’ He most earnestly replied, ‘Of course!
To me it is the most important question in philosophy.’ I had the following
afterthoughts: I imagine that if my friend had finally come to the conclusion
that he were a machine, he would be infinitely crestfallen. I think he would
think: ‘My God! How horrible! I am only a machine!’ But if I should find out I
were a machine, my attitude would be totally different. I would say: ‘How
amazing! I never before realized that machines could be so marvelous!’”_

Adding my own thoughts to this, I think what generates these sorts of articles
is an intuitive and well-justified resistance to the belief that "life is just
a complex machine", expressed through completely mistaken arguments.

The essence of life, existence and consciousness can never be reduced to any
mechanical or even logical processes (although some would like to believe
otherwise), but the fact that it can bloom into amazing creations like the
human mind, or possibly in the distant future into some human-made artificial
mind, doesn't detract from its magic.

------
jnmd
This is one of the worst straw man arguments I have ever seen. The author
makes broad generalizations with no attempt at understanding, and then goes on
to make completely ridiculous claims. Honestly made me sick to my stomach to
read.

~~~
s3nnyy
Can you specify? I thought the article was rather good because it addressed a
mistake geeks have made for centuries: thinking of people as machines.

~~~
sn9
The author built a strawman by assuming people who use the metaphor of
computation literally mean that the brain has a von Neumann architecture [0]
or something similar.

His definition of information is not the information theoretic definition, but
one that is narrowly confined to his strawman.

It's just an embarrassingly bad attempt at refutation. I can't imagine how
someone of his stature could write something like this when he should be
familiar with research like our understanding of how the visual and auditory
systems work (literally processing information).

[0]
[https://en.wikipedia.org/wiki/Von_Neumann_architecture](https://en.wikipedia.org/wiki/Von_Neumann_architecture)
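
For reference, the information-theoretic definition being alluded to is
Shannon's: information is a property of a probability distribution over
symbols, and says nothing about registers or any particular storage
architecture. A quick sketch (the "bits per symbol" framing here is the
standard one, not anything from the article):

```python
import math
from collections import Counter

def entropy_bits(message: str) -> float:
    """Shannon entropy of a message's symbol distribution, in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform 4-symbol message carries 2 bits/symbol; a constant one carries 0.
print(entropy_bits("abcd"))  # 2.0
print(entropy_bits("aaaa"))  # -0.0, i.e. zero
```

Nothing in that definition requires a von Neumann machine — which is why
"the brain has no memory registers, therefore it carries no information"
doesn't follow.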

~~~
dilemma
People in technology use the machine metaphor for the human mind literally.
See neural networks, and even basic terms like Artificial Intelligence (there
are not, and have never been, any intelligent machines or programs). This
certainly is not a strawman.

~~~
sn9
Please point me to a technologist who believes the brain to be a von Neumann
architecture.

The existence of misunderstandings by lay outsiders does not, in itself,
invalidate the use of the metaphor, especially as it's used within the actual
research community.

------
cr0sh
This essay seems to ignore the concept of "neural networks" (NN), both
biological and artificial.

Now, that doesn't mean that our current artificial representations of neural
networks are anywhere close to how biological neural networks work (for
instance, as far as I know, we have yet to find anything like
"backpropagation" in biological NNs - but if anyone knows differently, I'd be
most interested in learning).

But artificial NNs are the closest we've gotten to something that seems to
work the way brains (biological NNs) work and are structured. We have a
multitude (and seemingly ever-growing number) of models, but something is
lacking, something that seems almost fundamental.

For instance, we have all of these different models, but none of them is
"general" in the sense that a single one can be used for different tasks. Not
only that, but there is also a multitude of means for simulating the
stimulus/activity/firing of the neurons in the network: for a while the
"activation function" of choice was the sigmoid, but today the best results
come from ReLU. Then there's the whole concept of "spiking" networks, which
don't work like either (they seem somewhat "time dependent", from my
understanding). There are tons more where those came from.
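
For anyone who hasn't met them, the two activation functions mentioned are
only a line each; the practical difference is that the sigmoid saturates
(its gradient vanishes for large inputs) while ReLU stays linear for
positive inputs. A minimal sketch:

```python
import math

def sigmoid(x: float) -> float:
    """Squashes any input into (0, 1); saturates for large |x|."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x: float) -> float:
    """Rectified Linear Unit: passes positives through, zeroes out negatives."""
    return max(0.0, x)

for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  relu={relu(x):.1f}")
```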

Now, in this field I don't have much knowledge; I'm nowhere near an expert,
and I am certain it shows. At the same time, based on what I do know, I have
this feeling in the back of my head that while what we have built does seem to
work OK on certain classes of problems, it isn't the answer.

I think what we've built is overly complex; I think there is an answer out
there much simpler. It may be something that needs new hardware. I don't think
it will involve GPUs or any other linear algebra methodology (hardware or
software). It might likely be one of those "forehead slappers" when we do
discover it, whatever it may be.

It's a nagging feeling I have - maybe completely wrong. It just feels like we
are in a certain manner trying to build an airplane that flaps its wings to
fly, when the answer is ultimately much simpler. Maybe my feelings are false.

------
yourkin
Interesting perspective that denies the whole field of cognitive psychology
(despite the author being a renowned psychologist himself, albeit in
behavioral psychology). But I wonder: if information is going in and
information is coming out, what is happening in between if not information
processing, aka computing?

~~~
User23
Information is nonsense. In our models, perfect information and perfect noise
are indistinguishable. Usually this is just hand-waved away, but we all know
where you end up when you start with an incoherent definition: anywhere your
imagination can carry you.

One of the major philosophical errors technically educated but philosophically
ignorant intelligent persons make is not being aware of the role of the
interpretant. Computing "neural networks" have biological neural networks as a
necessary precondition. CS Peirce's understanding of semiotic really should be
required reading.

Charitably I'd call the tendency to say we are computers (that is to say,
we've built machines in our own image) the pathetic fallacy. Less charitably
I'd call it idolatry. The dream of building gods is ancient and persistent.

To answer your question, what's going on in the human mind remains a mystery
to all honest inquirers.

~~~
blacksqr
"Information is nonsense" states the person communicating by computer.

------
zshrdlu
> _What is the problem? Don’t we have a ‘representation’ of the dollar bill
> ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it
> and use it to make our drawing?_

This article makes some silly points. Why didn't the student draw a minotaur,
or some other thing, when they were asked to draw a dollar bill? There is a
clear correspondence between the drawing and the actual dollar bill.

------
trapperkeeper74
What seems most likely is that memory is, effectively, like a very lossy,
probabilistic compression spread with redundancy across a billion analog
storage nodes, similar in some ways to a Bloom filter. It's not binary,
though; it's analog. After all, there's no value in perfectly “unlearning” or
destroying a specific memory; just “free” it up through gradual deemphasis of
its parts, and perhaps it effectively goes away.
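
For readers who haven't met one, a Bloom filter answers "definitely not
stored" or "possibly stored", never "definitely stored" — which loosely echoes
the lossy, overlapping storage imagined above. A minimal binary sketch (the
brain analogy would be analog, and the sizes here are arbitrary):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over m bits.
    Lookups can false-positive, but never false-negative."""

    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item: str):
        # Derive k independent bit positions from k salted hashes of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("dollar bill")
print("dollar bill" in bf)  # True: stored items are always found
print("minotaur" in bf)     # almost certainly False (a True would be a false positive)
```

Note that a Bloom filter stores no retrievable copy of the item at all, only
smeared-out evidence of it — which is what makes it an interesting middle
ground between "stored representation" and "no storage at all".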

There's also the interplay between short-term and long-term memory formation,
which makes some approximation of the short-term memory (itself an
approximation) somewhat permanent.

Granted, recall is imperfect and different every time.

~~~
qplex
>Granted, recall is imperfect and different every time.

Still, one can memorize and recall some things with almost perfect accuracy
(at least over time), for example phone numbers.

------
qplex
What a disingenuous article.

The human brain is not a digital computer, but it does process information and
store memories.

------
stefanwlb
I enjoyed the article, and I always thought it was strange that people think
there is any resemblance between the brain (something beyond comprehension)
and some simple man-made tool following basic 1+1=2. I think people are unable
to come to terms with their ignorance, indeed, their eternal ignorance.

~~~
apk-d
There is plenty of resemblance: the human brain streams data from input
devices, transforms the data and produces outputs (in humans, mostly release
of chemicals, muscle-flexing and tongue-flapping... oh wait, that's muscle-
flexing too).

This is much more obvious in our much simpler-minded cousins, such as insects:
turn on a gas lamp and watch those suckers fly straight towards their doom.
Basically, they're little more than clever little automatons designed by
nature.

If you are of the opinion that behaviors like this don't occur in the oh-so-
complex-beyond-all-understanding humans, I remind you of the millions of
Americans suffering from disease and early death because there's _too much
food around the place_. Or maybe I'm ignorant and the reason for epidemic
obesity shall eternally remain beyond comprehension, I don't know.

