
Why Alan Turing Wanted AI Agents to Make Mistakes - headalgorithm
https://spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/untold-history-of-ai-why-alan-turing-wanted-ai-to-make-mistakes
======
Traster
I think what Turing is getting at here is actually less a commentary on what
AI should do than a commentary on a shortcoming in the test. When you're
looking for genuine intelligence, you should be looking for intelligence in
general. However, if your only reference point for intelligence is human
intelligence, then getting a basic sum wrong counter-intuitively becomes a
sign of intelligence.

In the end, getting the sum wrong is an emergent property of human
intelligence: we're good at things, but we're not deterministically perfect
like computers are. It's not better for a computer to share the shortcomings
of humans, but a computer that does is more likely to pass the 'AI' test,
which again indicates a shortcoming in the test.

~~~
nine_k
That is, to pass for a human, the AI should imitate humans. This includes
imitating their mistakes. An AI's genuine mistakes would likely be different
enough to help humans tell an AI from a human.

That is, the best strategy for an AI to pass the test is to be perfect, and
perfectly imitate a perfect amount of human mistakes.
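
As a toy sketch of what "a perfect amount of human mistakes" might look like
in code (the 4% error rate and the dropped-carry error model are my own
invented assumptions, not anything from the article):

    import random

    # Compute the exact answer, then inject a human-looking slip at a
    # calibrated rate. Rate and error model are illustrative assumptions.
    HUMAN_ERROR_RATE = 0.04

    def human_like_sum(a, b):
        exact = a + b
        if random.random() < HUMAN_ERROR_RATE:
            # A plausible human slip: a dropped or spurious carry puts
            # the answer off by ten, not off by some random amount.
            return exact + random.choice([-10, 10])
        return exact

Note that the machine must compute the right answer first in order to miss
it convincingly.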

This definitely _requires an ability to lie_, to knowingly say untrue
things.

This ability is also required by much simpler circumstances: when directly
asked, an AI should be able to lie and say "No, I'm a human", else it will
fail the test instantly.

This likely means that a robot following Asimov's three laws would never
pass a direct Turing test.

~~~
antpls
> This likely means that a robot following Asimov's three laws would never
> pass a direct Turing test.

Not necessarily true. An AI could pretend that it doesn't follow Asimov's
laws while in fact following them. For example, the AI could pretend that,
to execute a given order (second law), it first needs to understand the
psychology of the person who gave it. At that point the AI can change that
person's mind using persuasive tactics. If the AI successfully changes their
mind, it has avoided following the initial order simply by cancelling it
with the consent of its author.

The first and third laws also apply to humans, so they cannot be used to
detect an AI.

------
RcouF1uZ4gsC
I think there is a tendency to confuse human-like with intelligent, mainly
because humans are the only human-level intelligences we encounter. This
leads us to associate human-like behavior with intelligence.

For example take a look at the Terminator movies. As this xkcd illustrates
([https://xkcd.com/652/](https://xkcd.com/652/)), a real killer robot would
probably look more like an autonomous flying drone than a humanoid.

One of the big reasons we want AI is precisely so we do not get mistakes. We
do not want human foibles in our AI. No one is trying to make self-driving
cars that lose focus by texting or getting drunk just so they would seem
more human. No one is trying to make AI medical systems that give incorrect
diagnoses because they have been running for 30 hours.

As for me, I want AI to get around the limitations of being human, not to
pretend to be human by making human-like errors.

~~~
alehul
(Note: Disregarding the possible negative outcomes of general AI on humanity
for the sake of this comment)

While I completely agree with you that we don't want machines to make these
human-like mistakes, wouldn't you agree that it would be ideal for a machine,
when it makes mistakes, to do so in a human-like manner, rather than not?

One thing that appears to separate humans from AI is that, even as an AI
becomes better on average than humans at problems, games, etc., it still
occasionally makes some absolutely terrible decision that we can hardly even
understand the rationale behind. This is a huge flaw. In the rare instance
an AI makes a mistake, it's completely off-base and seemingly out of its
mind (at least to our knowledge). When a human makes a mistake, it's one
that's closer to the mark.

Is this not a valid reason to make AI more human-like in its mistakes,
assuming it will have mistakes of _some_ kind?

~~~
losteric
> an AI still occasionally makes some absolutely terrible decision that we
> can hardly even understand the rationale behind. This is a huge flaw. In
> the rare instance an AI makes a mistake, it's completely off-base and
> seemingly out of its mind (at least to our knowledge).

Some of the game-playing bots have made moves which seem like garbage in the
moment, only for a novel strategy to unfold in later moves (AlphaGo's move
37 against Lee Sedol is the canonical example). Is it possible to achieve
the novelty without the true garbage?

So I guess it depends on the work being automated. AI cars should not have
novel strategies and should "just" follow the rules, while a strategic
battle AI working _with_ a human might be better with out-of-the-box
thinking (and transparency of reasoning for the human to check).
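
For what it's worth, the novelty/garbage trade-off shows up even in the
simplest learning setups. In the toy epsilon-greedy bandit below (the payout
numbers and epsilon are invented for illustration), the agent deliberately
plays "garbage-looking" moves a fraction of the time, and those are exactly
the moves that discover the best option:

    import random

    # Epsilon-greedy bandit: mostly play the best-known arm, but with
    # probability EPSILON play a random arm. Numbers are illustrative.
    EPSILON = 0.1
    true_payouts = [0.2, 0.5, 0.8]   # arm 2 is secretly best
    estimates = [0.0, 0.0, 0.0]
    counts = [0, 0, 0]

    for _ in range(10000):
        if random.random() < EPSILON:
            arm = random.randrange(3)              # explore: looks like a blunder
        else:
            arm = estimates.index(max(estimates))  # exploit the best estimate
        reward = 1 if random.random() < true_payouts[arm] else 0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    print(estimates)  # converges near the true payouts

Turning epsilon down to zero is roughly the "just follow the rules" regime
that fits the self-driving case.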

------
alok-g
If the AI agent were quick and precise in answering everything, wouldn't
that by itself allow a judge to tell the computer apart from a human?

I believe that faithfully passing the Turing test requires the machines to
copy human behaviors feature by feature, bug by bug. Likewise, it requires
them to demonstrate a good understanding of human language.
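
As a crude illustration, even response timing would have to be copied.
Something like the sketch below (the 50 wpm typing speed and the
thinking-time range are assumptions I made up, not measured values):

    import random
    import time

    # Delay an instantly-computed answer so it arrives at roughly human
    # typing speed. The constants are illustrative assumptions.
    WORDS_PER_MINUTE = 50

    def reply_like_a_human(answer):
        typing_time = max(1, len(answer.split())) * 60.0 / WORDS_PER_MINUTE
        thinking_time = random.uniform(0.5, 3.0)  # pretend to ponder
        time.sleep(thinking_time + typing_time)
        return answer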

------
bellerose
I'm guessing a perfect "AI" would simulate every possibility given its
current input sources and find the best solution. Perhaps that is similar to
our own lives: the universe being a deterministic system, with slightly
small edits compared to other universes that function while keeping track of
one another. What we call "mistakes or failures" are just a summation of
events needed to witness a certain outcome. All the outcomes of all the
other universes, maybe together with our own, hold one sacred outcome. Maybe
you're experiencing it now, or maybe not.

