
The AI delusion: why humans trump machines - sebg
https://www.prospectmagazine.co.uk/magazine/the-ai-delusion-why-humans-trump-machines-robots-artificial-intelligence-alpha-go-deepmind-marcus-davis-koch-mitchell-review
======
ctoth
Recognition of the powerful pattern matching ability of humans is growing. As
a result, humans are increasingly being deployed to make decisions that affect
the well-being of other humans. We are starting to see the use of human
decision makers in courts, in university admissions offices, in loan
application departments, and in recruitment. Soon humans will be the primary
gateway to many core services. The use of humans undoubtedly comes with
benefits relative to the data-derived algorithms that we have used in the
past. The human ability to spot anomalies that are missed by our rigid
algorithms is unparalleled. A human decision maker also allows us to hold
someone directly accountable for the decisions. However, the replacement of
algorithms with a powerful technology in the form of the human brain is not
without risks. Before humans become the standard way in which we make
decisions, we need to consider the risks and ensure implementation of human
decision-making systems does not cause widespread harm. To this end, we need
to develop principles for the application of human intelligence to decision
making.

[https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/](https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/)

~~~
jodrellblank
Myron Aub, a low grade Technician, discovers how to reverse-engineer the
principles of pencil-and-paper arithmetic by studying the workings of ancient
computers which were programmed by human beings, before bootstrapping became
the norm—a development which is later dubbed "Graphitics". The discovery is
appropriated by the military establishment, who use it to re-invent their
understanding of mathematics. They also plan to replace their computer-
operated ships with lower cost, more expendable manned ships to continue the
war.

Aub is so upset by the appropriation of his discovery for military purposes
that he commits suicide. As Aub's funeral proceeds, his supervisor realizes
that even with Aub dead, the advancement of Graphitics is unstoppable. He
executes simple multiplications in his mind without help from any machine,
which gives him a great feeling of power.

[https://en.wikipedia.org/wiki/The_Feeling_of_Power](https://en.wikipedia.org/wiki/The_Feeling_of_Power)

------
deathanatos
> _In some circumstances it has been surpassed already. It’s not an unfamiliar
> experience today to wonder whether we’re interacting online with a human or
> an AI system, and even alleged experts have been taken in by bots like
> “Eugene Goostman,” which, posing as a Ukrainian teenager, fooled a panel of
> Royal Society judges in 2014 into thinking it was human._

No, the Turing test has _not_ been surpassed. The "panel" was not fooled;
_one_ judge on the panel was fooled. And how, I don't know. The sample dialog
on the Wikipedia page for that bot is the sort of pointless trivia questions
that a bot can answer and a human can get wrong, and it won't tell you
anything about who you're interrogating. If you instead try holding context
across multiple inquiries, the charade quickly reveals itself:

(">" = human, "<" = bot's reply.)

> Hello.

< Nice to meet you! Please tell me how you found me and what you'd like to
discuss :-)

> Through Wikipedia. Tell me about somewhere you've visited.

< May I ask you about the real reason of your interest? Maybe, let's talk
about something else? What would you like to discuss?

> Curious about your history. Why do you not want to discuss it?

< How do you know? I thought nobody could guess that! Could you tell me what
are you? I mean your profession.

The previous reply is nonsense, and the constant evasion is typical of chat
bots like Cleverbot. _Clearly_ bot. I know I'm not blind, but I think we're
kidding ourselves to equate this with the dialogue capabilities of a human. It
took 3 queries to get to that. Even more recent attempts, like AI dungeon,
fall apart after 2-3 queries that require context and understanding.

~~~
perl4ever
On the other hand, I regularly get phone (and sometimes email) support that
seems to have similar flaws, and I know (or think I know) it's humans. The
difference is quite dramatic when the person on the other end does handle
context and analytic thinking.

Maybe there's a bifurcation going on, and some people are converging with AI
and others are not?

~~~
dragonwriter
> On the other hand, I regularly get phone (sometimes email) support that
> seems to have similar flaws and I know (I think I know) it's humans.

Conversational bots for support are increasingly common, and the voices are
getting better. Even outside of actual conversational bots, humans following
flowcharts that essentially render them into conversational bots have been a
thing for quite a long time.

~~~
perl4ever
By "conversational bots", do you mean the sort that say "I'm sorry, I didn't
quite catch that" in response to most things? I'm not talking about that. I'm
talking about someone who can discuss a topic briefly but doesn't focus on the
right details needed to be responsive.

------
nohat
Koch (or perhaps just the reporter quoting him) contradicts himself. Even by
his own definition of consciousness, a machine architecture merely needs a
feedback loop to be conscious, something hardly unheard of in computer
programs. Now arguably that definition isn't terrible, because human
consciousness does seem like a supervisor -- something that synthesizes all
the subprocess work and makes sure it has a coherent story.

~~~
skosch
Don't worry: Tononi (and, by extension, Koch) has thought about IIT much more
than the author of this article.

You are correctly pointing out the conflict between computationalism
("consciousness is what certain computations feel like from the inside") and
physicalism ("consciousness is what certain computations feel like from the
inside, when they take place on physical substrates, perhaps requiring the
involvement of quantum decoherence").

One of the best primers on the current state of academic hypotheses on these
topics is the whitepaper "Principia Qualia" by the good folks at the Qualia
Research Institute [0].

[0]
[https://opentheory.net/PrincipiaQualia.pdf](https://opentheory.net/PrincipiaQualia.pdf)

------
oklol123
Don’t want to drag down the enthusiasm, but we really should distinguish
between data analytics, actor systems, etc., and of course "artificial
intelligence".

~~~
masswerk
Also, we should make a distinction between multi-layered, hierarchical
information, classification, and decision-finding systems in general and a
possible machine implementation thereof. Those systems have shown remarkable
performance when operated by humans, e.g., the Dowding system [0]. (I don't
think that any ML/AI systems are currently able to surpass this.)

[0]
[https://en.wikipedia.org/wiki/Dowding_system](https://en.wikipedia.org/wiki/Dowding_system)
(The article contains only a quite crude and basic representation of the
operations of the filter room.)

~~~
oklol123
The point is that looking at artificial intelligence like this is naive. There
are some things actor systems excel at but fail at others. There are some
things NNs can do pretty well but suck at others. Comparing humans to all
applications of AI is naive, and no definitive answer can be given, because it
really depends. I think this will remain the status quo in the future.

~~~
masswerk
This was more about comparing multi-layered systems to single-actor ones, and
about us more or less ignoring what humans could achieve, and historically
have achieved, with those, rather than taking multi-layered information
processing as a defining characteristic of algorithmic systems. (Actually,
this isn't new; it has been deployed with massive success, and we are
repurposing it with mixed results.)

------
philipkglass
_In Koch’s picture, then, the Turing Test is irrelevant to diagnosing inner
life. What’s more, it implies that the transhumanist dream of downloading
one’s mind into an (immortal) computer circuit is a fantasy. At best, such
circuits would simulate the inputs and outputs of a brain while having
absolutely no experience at all. It will be “nothing but clever programming…
fake consciousness—pretending by imitating people at the biophysical level.”
For that, he thinks, is all AI can be. These systems might beat us at chess
and Go, and deceive us into thinking they are alive. But those will always be
hollow victories, for the machine will never enjoy them._

This is really damning humans with faint praise. "Machines may eventually do
every job better than we can, and be immortal, but I promise that humans will
remain superior in some completely undetectable way."

~~~
lachlan-sneff
I've never understood why people feel that way. As far as I can tell, there is
no evidence that consciousness cannot be emulated by a machine.

~~~
axguscbklp
A machine could emulate consciousness in the sense that we could probably in
principle build a machine that acted, viewed from the outside, as if it were
conscious. But we have no way to measure whether something is conscious or not
so we would never really know if the machine actually was conscious, had
interiority, had qualia, etc. (three different ways of saying the same thing).

There is no good reason to believe that consciousness is reducible to physical
phenomena. I think that intelligence is almost certainly reducible to physical
phenomena, but consciousness? No. Consciousness is a mystery that quite likely
will forever be beyond the reach of physical investigation.

~~~
MikeSchurman
By the same argument I'll never know if other humans are conscious.

~~~
gfodor
This is clearly the case today, pending some new discovery or intellectual
leap surrounding consciousness. For all I know, you are unconscious, and your
purpose was to type that comment so as to trigger my realization I am the only
conscious being in existence :)

(Not a good way to live your life, but a theory consistent with the evidence.)

------
gfodor
Brain-neural interfaces or other high bandwidth sensory tools (like VR) seem
to be a good path to start answering the falsifiable hypotheses around
consciousness. It seems eminently possible to link minds, either physical
minds or a physical-virtual duality, and start experimenting with the qualia
that emerge. I’d conjecture that a highly dedicated individual today, with
clever hacking, could actually perform crude experiments using available
hardware to deduce if they can shift their consciousness to a form of duality
with a virtual agent by sufficient sensory override.

~~~
kranner
> if they can shift their consciousness to a form of duality with a virtual
> agent by sufficient sensory override.

Can you explain what you mean? It seems we already have this with naïve use of
consumer VR devices on most VR experiences.

For instance, while playing Richie's Plank Experience, the player already
feels qualia due to:

1. virtual self in virtual world: looking down the plank in mid air, where the
plank is virtual and being in mid air outside a tall building is virtual

2. physical self in physical world: if a friend nudges you from behind (or out
of sight of the virtual self), or you have a fan blowing wind that would
correspond to the breeze you would expect to feel in the virtual world, or
you're walking on a real plank to which the virtual plank has been
resized/calibrated (this one is a true hybrid real+virtual 'quale': you see
the virtual plank but feel the real plank with your feet).

If you meant the virtual agent would have separate agency from the player,
please explain as I don't quite follow.

~~~
gfodor
Well, there are a variety of theses around whether one could transfer one's
consciousness into a computer. Presumably, if this were possible, it would
also be possible to _partially_ transfer one's consciousness into a computer.
Now, this is going to be a subjective measure (at least for now, unless we
find a way to measure consciousness). However, if someone were able to
self-report that their consciousness had been transferred into a computer,
that would at least be something. So, inductively, if a person were able to
self-report that their consciousness had been _partially_ transferred into a
computer, in some form or another, that would be some evidence that a more
meaningful transfer could occur. So the question reduces to what mechanisms
exist today that could potentially allow such a partial transfer.

I think a huge gate opens if you get a high-bandwidth brain/computer
interface. But until then, an experiment like the following may give us a
glimpse of the tractability of such a transfer:

- First, you need a full audio and visual override of one eye and one ear
(basically an over-ear headphone on one ear, and an over-eye VR goggle).

- You also need to be able to track _something_ on a person that a) is
underutilized in the 'real' world and b) has sufficient bandwidth to convey
some kind of agency of control over a virtual agent. For example, you could
wire things up so a person's toe orientations could be measured, or internal
teeth motion, or subvocalization, etc. You _could_ have dual-use mechanics,
like a dedicated hand controller that still gives you freedom to use your
hand, but I'd argue this might compromise the experiment.

- Combined with these two, with the hardware on and the inputs working,
connect this person bi-directionally with a virtual agent in a simulation. (A
modern first-person video game would probably suffice.) The video and audio
feedback is obvious; you'd need to come up with a mapping for the inputs. Give
the person some goals to accomplish in the simulation that are non-trivial and
take concerted mental and physical effort to achieve.

At the end of, say, a month's worth of this dual immersion in both the
physical body and the virtual one, have the person self-assess their conscious
state. I think it's even odds whether they feel like they are still playing a
video game, or are actually manifested consciously as inhabiting two places at
once, whatever the hell that means. The experiment is predicated on the idea
that brain plasticity would start to allow the person to experience a level of
conscious experience and control over the virtual agent similar to the
physical world, since their sensory inputs and agency are at some level of
parity in time and bandwidth between the physical and the virtual. It's far
from total parity, but it's well beyond what has been possible in recent
history, so it may be past the tipping point necessary for changes in
conscious experience.

If the latter were to occur, and the person feels a duality of presence, then
the subjective experience of, say, closing off the person's other remaining
senses attached to the physical world (i.e., their free eye and ear, etc.) and
waiting another month may actually result in a representative experience of
what a consciousness transfer would feel like.

~~~
candiodari
No you don't: have you seen the TV show "Caprica"? Part of the plot is that a
girl creates an artificial afterlife by duplicating people upon their death
from data collected during their life, ironically accidentally killing herself
in the process. Technically that's a spoiler, but it's put sufficiently
misleadingly, I hope; plus it's revealed in the first 20 minutes of the first
episode.

And this can work. You could copy a human merely by observing them. Suppose
you get enough data on a person (say, yourself, so it isn't creepy) and train
an algorithm on it. Once the algorithm is good enough at imitating you that
your own mother can't tell the difference between a robot containing that
algorithm and you, is there still a difference? Every mathematician should say
"no". After all, 4 and IV are the same number.

You don't need audio or visual override. You don't need to give any
flesh-and-blood being agency over a virtual body. Even the robot mentioned
before is really optional; you could eliminate it.

You could simplify further. You don't need to duplicate a human; you just need
to provide them with "a kid": an algorithm that's purely virtual but
sufficiently smart that humans can raise it to become part of human society.
Its mind can start out mostly empty. So you don't even need to duplicate
anything; you just need a sufficiently complex learning algorithm, a way to
interact with it, and a human (perhaps preferably a couple?) to teach it, if
you're willing to give it a generation or two.

~~~
perl4ever
"Once the algorithm is good enough at imitating you that your own mother can't
tell the difference between a robot containing that algorithm and you, is
there still a difference? And now every mathematician should say 'No'. After
all, 4 and IV are the same number."

I don't understand what you are talking about. Do you think identical twins
share the same consciousness? Is a clone the same person as who it was cloned
from?

It seems to me that "4" and "IV" are abstract concepts that have a basically
unlimited number of relationships with the real world, like tentacles. The
mathematical concept of _four_ is just one of those linkages.

~~~
candiodari
The point of the exercise was to prove that something non-human can be
conscious at all. So if it "shares the same consciousness", that's great. If
not, it's still good enough.

------
Rury
>It will be “nothing but clever programming… fake consciousness—pretending by
imitating people at the biophysical level.” For that, he thinks, is all AI can
be. These systems might beat us at chess and Go, and deceive us into thinking
they are alive. But those will always be hollow victories, for the machine
will never enjoy them.

Mostly agree with Koch, but I'd take it a step further...

There are major problems behind the concepts of AI, and even intelligence
itself, and it's difficult to articulate why. It’s as if these terms require
aggrandizing to the point of impossibility, or they lose all their apparent
meaning. Which is why I feel we'll never achieve what we call (Strong/General)
AI, or if we do, we will always find ways to be unimpressed by it...

I mean, is it that absurd to consider that the idealized concept of
intelligence isn't a reality, even in humans? If you pull back enough layers
on how or why humans think and do the things they do, we arrive at things we
can't explain. We don't know what causes intelligence and have trouble coming
up with an adequate definition for it, similar to the concept of life. For all
we know, we might just be highly complex biomechanical machines operating on
stimuli, analogous to what current computers already do. Where's the fine line
between something being conscious and unconscious?

~~~
missosoup
[https://en.wikipedia.org/wiki/Philosophical_zombie](https://en.wikipedia.org/wiki/Philosophical_zombie)

There's no discernible difference between p-zombies and 'real' conscious
beings. There's a good chance that we're all p-zombies and a distinction
between zombie and real doesn't exist.

> If you pull back enough layers on how or why humans think or do the things
> they do - we arrive at things we can't explain.

Sounds a lot like a magical argument. There's no evidence that anything about
the way human minds work is fundamentally unexplainable.

~~~
Rury
>There's no evidence that anything about the way human minds work is
fundamentally unexplainable.

Agreed. What I'm implying here is that if we come full circle to fully
explaining how the brain works, then in theory we could predict how someone
will function given all input conditions (whether this is practical is another
matter). If this is true, then wouldn't we just be biomechanical robots
operating on stimuli? Would we be any more intelligent than, say, a rock,
which similarly just reacts to physical stimuli? Or are we just a more complex
rock?

~~~
l_t
> If this is true, then wouldn't we just be biomechanical robots operating on
> stimuli

Yes, that's right. Which is one of the big problems with this line of argument
for many people -- how do you reconcile "biomechanical robot" with the
concepts of "free will" and "moral decision-making"?

I'm personally agnostic on the issue (I don't believe science is able to
answer this question yet -- or possibly _ever_.)

I do think it's interesting that you say "_just_ a biomechanical robot." The
"just" implies that being a robot -- a fancy rock -- in some way _isn't
enough_. But in my mind, there's absolutely no (objective) reason to think of
a human as any better or more important than a robot, or a rock.

~~~
Rury
Yeah, sorry I don't mean to imply that.

I'm just trying to challenge people's assumptions of what makes something
intelligent. To me, it's not precisely clear as to what makes something
intelligent, and is why I think we've had trouble coming up with a
satisfactory test for determining when a computer is intelligent.

~~~
l_t
Gotcha -- that makes total sense. Thanks!

(P.S. Sorry for putting words in your mouth -- I don't want to imply you said
anything you didn't. My bad there.)

