
Is Neuroscience a Bigger Threat than Artificial Intelligence? - acsillag
https://www.3ammagazine.com/3am/is-neuroscience-a-bigger-threat-than-artificial-intelligence/
======
Nevermark
The article's logic is a soup sandwich of two huge misunderstandings:

First, "Theory of Mind" is something experienced directly at a high level. We
do reason about our own and others' actions (both successfully and
unsuccessfully, of course). We still have much to learn about how this works
and is implemented, but I don't think it is controversial that each of us
develops theories about our own mind and others' minds and uses them along
with other information.

Second, "Theory of Mind" does not mean "a scientific theory of the mind"; it
means that each of us develops internal theories of our own and others' minds.
In one case "Theory" means something in our heads; in the other, "theory"
means a scientific body of knowledge.

This seems to have confused the author greatly.

So "Theory of Mind" has nothing to say about how neural circuits record
information about locations in a maze and whether that recorded information
can be considered "meaning" or not.

"Of course you could argue that what Nobel Prize winning research shows about
rats is irrelevant to humans."

Wow! This really underlines that second confusion. Of course rats' brains can
tell us something about "theories of minds", but since we already know rats
don't model each other's internal states with the detail we do, it would be
problematic to view them as reliable sources for investigating the particular
theory of human minds called "Theory of Mind".

Sigh.

~~~
marmaduke
Rat brains are used to study mechanisms presumed to be conserved in humans,
e.g. aspects of molecular biology. That doesn't mean that rat social behavior
couldn't be used to model some aspect of human social behavior.

~~~
Nevermark
No doubt rat behavior is worth using to model a lot of human behavior and
brain organization, but the article talks specifically about Theory of Mind
which is something few animals show strong signs of.

Some aspects of our species are different from most other animals; this seems
to be one of them. Maybe dolphins or whales have comparable Theory of Mind
capabilities?

------
conjectures
From a skim read this article is pretty sloppy. It argues that the human OEM
theory of mind is in some way unique or unreproducible. This conclusion is
completely unsupported by the text, which is just a sophisticated word salad.

~~~
nipponese
Well, you gotta love and respect the site's mission statement: _Whatever it
is, we're against it_

~~~
conjectures
Lol yeah. Including themselves, I guess.

------
djokkataja
> What does all this mean? Watson may beat us at Jeopardy, but we are
> convinced we have something AI will always lack: We are agents in the world,
> whose decisions, choices, actions are made meaningful by the content of the
> belief/desire pairings that bring them about. But what if the theory of mind
> that underwrites our distinctiveness is build on sand, is just another
> useful illusion foisted upon us by the Darwinian processes that got us here?
> Then it will turn out that neuroscience is a far greater threat to human
> distinctiveness than AI will ever be.

I'm not sure how someone could believe that AI would always lack these things
without also believing in some form of essentialism, basically believing that
major parts of the human experience are "magic" somehow rather than derived
from physical processes.

Anyway, the author's major concern seems to be that neuroscience has
demonstrated that desires and beliefs are not directly real in the same way
that, say, your brain is a particular collection of atoms restricted to the
physical confines of your skull. I suppose this could be very disturbing if
you have the sense that your life is significantly more meaningful if all of
your desires and beliefs are as real as the ground you walk on rather than
being some illusory projection of the brain.

The problem actually arises from the associations that we have with the word
"illusion". As soon as we start to think of our beliefs and desires as
"illusions", it's natural to leap to terms like "fake", "false", "worthless",
"deceit", and so on. In other words, there are all kinds of negative
connotations that we associate with the word "illusion", so when we use that
to describe core parts of what we think of as the human experience, it makes
us feel bad.

But this is simply a limit of our language: what _are_ the illusions of
"beliefs" and "desires" anyway? Desires correspond to our experiences; if we
want food, that corresponds to a very real physical state of our physical
bodies. Beliefs are a lot shakier because they tend to be built on complex
layers of ideas, but they correspond to physical reality as well, whether or
not they're particularly accurate. So these things are very useful and
important despite being "illusions". And there's a huge difference between
these "illusions" and direct falsehoods. So I think the real limitation here
is that we don't have language that's particularly well adapted to discussing
and thinking about these things.

~~~
evdev
I think it largely comes from uninterrogated/unrevised dualisms applied to the
scientistic world. I think of Rosenberg's eliminativism as follows:

a) Platonic realism is the only way our semantic categories/abstractions could
be meaningful.

b) The world is physical (platonism is not true).

c) All our categories are meaningless! Our abstractions are unreal!

You can see it all follows from a), and there are serious questions about
whether this is actually a physicalist philosophy...

~~~
joe_the_user
If instead of requiring various concepts of philosophy to be real, one simply
requires them to be _irreducible_ - not explainable as an ensemble effect of a
larger system - then one winds up with nearly the same thing. And I don't know
how this requirement of irreducibility isn't a part of many philosophical
views of the mind - specifically, isn't that what's important about "qualia",
that they are irreducible?

~~~
evdev
Why would you require this? Concepts can either be a part meaningfully
differentiated from the whole (neocortex, hypothalamus -> the whole brain) or
an aggregate that while reducible to more basic parts is meaningfully
differentiated from other similarly reduced aggregates (neocortex vs.
hypothalamus). What seems to define the demand for the "real" or "irreducible"
is wanting the nature of reality to in some sense "do" this differentiation
"for us".

And yes I would say that "qualia" insists on dualism.

~~~
Nevermark
I agree, but would fine-tune your last statement to:

"Assuming the experience of "qualia" is irreducible is to insist on dualism."

The Qualia does exist. Unfortunately, no one can be told what The Qualia is.
You have to see it for yourself.

------
zitterbewegung
The article seems to take the position that intelligent life on Earth is zero
sum and there is no victor. I'm not sure that is the case.

Then the rest of the article details the concept of theory of mind in
philosophy and how that is not congruent with how neuroscience understands the
brain. I don't see where the conflict is here unless you want to believe in
the theory of mind.

~~~
monocasa
So far, intelligence on Earth has been zero sum. We've wiped out multiple
other intelligent species until we were the only ones.

~~~
Nasrudith
I think that was more a result of a shared environmental niche than anything
else, since it happens with non-intelligent species as well. Predators fight
over the same prey niche even within the same species. Stable multipredator
shared-prey arrangements have a clear power differential: the stronger one can
chase the weaker ones away from kills, and if they are slow enough that may be
their main strategy.

If there were, say, merpeople from a similar timetable, it would take a long
time for conflicts to start because they would have a totally different
environmental niche.

Anyway early bands of hominids were nomadic by necessity due to localized
depletion of food supplies. Given that scarcity it is no wonder that they
would come into conflict and managed to drive megafauna to extinction. And
their numbers weren't exactly high - humanity bears genetic scars of near
extinction.

Said scarcity is such a thing of the past that we have food distribution
problems instead of production ones.

Granted there is also a long history of zero sum thinking holding humanity
back.

~~~
monocasa
Wouldn't intelligences engineered to be a component of our society, by
definition, be sharing an environmental niche with us? Going back to your
merpeople analogy, we wouldn't put in the effort to build AI just to drop them
in the ocean and have them live their lives separately from us, providing no
use to us, unless they become so economical that they're just literally
everywhere on Earth.

Edit: and going back to your merpeople example, the onset of humans represents
a mass extinction level event for the closest thing to merpeople: cetaceans.
[http://www.iucn-csg.org/index.php/status-of-the-worlds-cetaceans/](http://www.iucn-csg.org/index.php/status-of-the-worlds-cetaceans/)

------
zeroname
_"Besides its economic threat, the advance of AI seems to pose a cultural
threat: if physical systems can do what we do without thought to give meaning
to their achievements, the conscious human mind will be displaced from its
unique role in the universe as a creative, responsible, rational agent."_

That's a bit of an aggrandizing characterization for the gang of monkeys that
climbed down from the trees a couple of ice ages ago and that almost nuked
itself over a disagreement on the best way of distributing bananas, roughly
speaking.

 _"We are agents in the world, whose decisions, choices, actions are made
meaningful by the content of the belief/desire pairings that bring them about.
But what if the theory of mind that underwrites our distinctiveness is build
on sand, is just another useful illusion foisted upon us by the Darwinian
processes that got us here? Then it will turn out that neuroscience is a far
greater threat to human distinctiveness than AI will ever be."_

Oh my, a threat to our _distinctiveness_! Next thing those scientists will
tell us that the earth is not the center of the universe, that the stars in
the sky aren't fixed and that indeed Santa Claus himself is not a real person
running a sweatshop at the north pole.

~~~
Nevermark
OMG, your last paragraph brought tears to my eyes. Did your parents not tell
you? (Can someone else... I cannot do it!)

Seriously though, humans have been replaced every few years since we showed
up. As long as AIs don't annihilate each other, I think we should just be
proud parents and stop worrying that they are going to grow up and want the
car keys.

------
FlowNote
I posted about fusing Neanderthal brains to spiking neural networks here once.
A HN poster threatened to get the Pope to stop me.

[https://news.ycombinator.com/item?id=18425194](https://news.ycombinator.com/item?id=18425194)

You have no idea how much more powerful neurology is than machine learning.

~~~
taneq
As we start to understand neurology sufficiently to simulate it, the two
fields are converging. Eventually (soon?) they'll be seen as sub-fields of the
study of intelligence.

(As for your fusing neural organoids to computers, there was a sci-fi story I
read years ago that used this theme. There were human-level AIs that needed
some integrated organic grey matter to perform some of their higher level
functions. Wish I could remember what it was - although that description
probably covers half a dozen stories by now.)

------
SubiculumCode
"Alex Rosenberg is the R. Taylor Cole Professor of Philosophy and chair of the
philosophy department at Duke University."

Not a neuroscientist. Not a cognitive psychologist. He gives short shrift to
the utility of different levels of description to inform each other.

~~~
n4r9
Are we to dismiss philosophers of science for talking about the philosophical
implications of science?

~~~
SubiculumCode
No. But does he understand the neuroscience?

------
xazo
A more informative title would be "Is Neuroscience a Bigger Threat to Human
Distinctiveness than AI?"

~~~
n4r9
The title was very misleading, especially since AI's "threat" usually means a
paperclip maximiser or something. I ended up skimming through the whole
article to try and figure out what was being threatened. Until the very end I
assumed it was our ability to function once we know how meaningless
consciousness is.

~~~
bloke_zero
I think the threat implied would be that if all of our beliefs are essentially
meaningless then nihilism might appear to be the logical outcome.

That is largely what I believe (about beliefs) - they are cute but
meaningless.

~~~
hprotagonist
well that's cute... but meaningless :)

------
carapace
Don't mean to be snarky, but I think Prof Rosenberg has just discovered the
Hard Problem of Consciousness.

[https://en.wikipedia.org/wiki/Hard_problem_of_consciousness](https://en.wikipedia.org/wiki/Hard_problem_of_consciousness)

------
ny2244111
Just a developer, but is this author subtly arguing that in the end,
behaviorism has some sort of validity?

~~~
ex3xu
The weird part is that he specifically mentions behaviorism:

> Except for a brief period when psychologists embraced a wrongheaded
> behaviorism, the theory of mind everyone shares drove the 20th century
> research programs of child psychology and psychiatry, cognitive science,
> evolutionary anthropology, and neuroscience.

But then by the end goes ahead and suggests the premises of behaviorism as a
conclusion...?

> By colonizing consciousness spoken language turned it into a monologue of
> silent speech, tricking us that the meaning of spoken words is given by
> thoughts’ content when its just silent sounds passing through consciousness.
> Neuroscience shows that that in our brains the neural circuits neither have
> nor need content to do their jobs.

Maybe I'm misunderstanding something somewhere? Is this guy really the one in
charge of the philosophy department at Duke?

------
starbeast
Combine the two and then you get those rat-brained robot things. Gives you a
nice little shortcut to true awareness in AI, by cheating and using meat to
tell the silicon what to do.
[https://www.technologyreview.com/s/401756/rat-brained-robot/](https://www.technologyreview.com/s/401756/rat-brained-robot/)

------
hnuser355
Neuroscience is the most powerful science, because the human brain is what
does science in the first place.

------
cirgue
I think that 'artificial intelligence' being used as a curtain for the Ozs of
the world to hide behind is a much greater threat than either of these things.

------
igravious
No.

~~~
lashkari
I read once that the answer to any article whose title is a question is always
"no."

In my own anecdotal experience over the past couple of years, this "no"
hypothesis has been right about 90% of the time.

------
igolden
Can someone drop a TLDR for the at-work crowd? would be V grateful

~~~
dwaltrip
Humans have this seemingly innate or nearly innate theory of mind, where the
beliefs and desires of other people can be reasoned about in an effort to
understand or predict their behavior.

A few key neuroscience results suggest that the brain does not really work how
the "theory of mind" has us think it should. The concepts of belief and desire
that we use in our theory of mind, may actually just be crude heuristics that
were useful at some point in our evolutionary past, but don't actually have
explanatory power in the way most humans naturally try to apply them today.

I don't really know how strong this idea is. I think it is fairly speculative
in these early days of research.

My summary may have a bit of extra color. I listened to Sean Carroll's
podcast with the article author, where this same topic was a big part of the
conversation.

~~~
Nevermark
That is a pretty good TLDR; in fact, I think it's better than the whole article.

But heuristics are real things and I have never heard any claims that "Theory
of Mind" assumed some particular neural circuit implementation.

So there is a false dichotomy if the article is to be interpreted as pitting
"Theory of Mind" against evidence that it isn't a simple circuit.

I don't think the article can be reduced to anything that actually makes
sense.

