
By 2029 no computer or “machine intelligence” will have passed the Turing Test - fhinson
http://longbets.org/1/
======
karmacondon
I was struck by the prescience and clarity of Mitch Kapor's writing. He
correctly predicted that a computer would be able to win on Jeopardy [0], 9
years before the fact. And he understood that the Turing Test was more of a
thought experiment than an actual test [1], which is why I assume he took the
bet in the first place.

Conversation is actually a rather poor measure of intelligence. I would say,
show me a computer that can learn _anything_ that it wasn't specifically
programmed to learn. This doesn't mean unsupervised categorization or learning
to play a video game. I'm talking about a scenario where the programmers
present their code or machine with no advance knowledge of what the task will
be. Not
something chosen from a known list of possibilities, but any task that a human
could conceivably be taught to perform in less than an hour. Anything from
writing a sonnet in iambic pentameter to assembling IKEA furniture based on
instructions. A true test of _general_ intelligence.

I would take that long bet out to 2129, and beyond. I don't see software with
that level of intellectual flexibility being written in our lifetimes, or the
lifetimes of our children or their children.

[0] "While it is possible to imagine a machine obtaining a perfect score on
the SAT or winning Jeopardy..."

[1] "... a skeptic about machine intelligence could fairly ask how and why the
Turing Test was transformed from its origins as a provocative thought
experiment by Alan Turing to a challenge seriously sought."

~~~
ObviousScience
I think you presented a false comparison.

You said anything we can teach a human within an hour: do you mean a fresh
human that's only just been supplied with a brain? Because you can't teach a
baby anything besides basic stimulus-response on that time frame, and I'm
pretty sure we can get computers to do that if we rig up the stimulus-response
hardware in a way comparable to a newborn's.

What I think you actually meant, and what I think is a completely rigged test,
is comparing a human with years of training and adaptive hardware modification
to a computer with absolutely no training, in their ability to learn a new
skill or build out a knowledge base. That, of course, is a completely terrible
comparison.

I'll take your bet, and call it by 2050, if you're willing to compare a human
with 5 years of hardware and knowledge base training and adaptation to a
computer with 1 year of hardware adaptation (e.g., FPGA circuit reprogramming) and
1 year of knowledge base training.

~~~
geon
I assume the AI would be pre-trained to (or beyond) the level which could be
expected of an adult.

The "within an hour" part refers to the test itself.

~~~
ObviousScience
I'm willing to modify the bet for that: I think an AI given 20 years of
training and hardware adjustment (things like FPGA components) can learn in
one hour anything I could teach a person in the same amount of time, given the
same amount of pre-training.

------
jakobegger
Can anybody point me to the papers where scientists have actually "reverse
engineered (...) regions of the brain" or present "highly detailed
mathematical models of (...) neurons"?

As far as I know, research in those directions is nowhere near as
sophisticated as Kurzweil tries to make us believe. The mathematical models
for neurons I've seen may reproduce some firing statistics, but they are not
at all suitable for actually modelling behavior of a system in response to a
stimulus.

~~~
im2w1l
While simulating a human brain should be _sufficient_ to pass the Turing
test, that doesn't mean it is _necessary_.

~~~
jakobegger
I completely agree with you.

However, Kurzweil's argument focuses on the fact that he believes we will
someday be able to simulate the human brain, and that's something I disagree
with strongly.

~~~
johansch
Saying that we will never be able to simulate a piece of hardware/wetware is a
pretty strong statement. (Unless you think there is some kind of magic inside
it.)

~~~
jakobegger
The obstacle is that systems composed of many simple elements quickly become
so complicated that we can't simulate them anymore, even if we completely
understand the individual elements.

~~~
sumitviii
But we have working examples of that machine.

The problem is in completely understanding it. If we completely understand it,
then it shouldn't be hard to simulate.

~~~
jakobegger
I'd argue against that. Somewhere else in this thread someone brought up
weather prediction: even though we completely understand the physics of
weather, we can't simulate it precisely at a global scale, for several
reasons (not enough information, not enough computational resources, chaotic
behavior). I think it's the same with our brain.
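
The "chaotic behavior" point is easy to demonstrate with a toy system. This
sketch (my own illustration, not anything from the thread) iterates the
logistic map, a textbook chaotic system, from two starting points that differ
by one part in ten billion:

```python
# A toy illustration of chaotic behavior: under the logistic map
# x -> 4x(1-x), a tiny error in the initial condition keeps
# growing until the two trajectories diverge completely.
def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.4, 60)
b = trajectory(0.4 + 1e-10, 60)  # perturbed by one part in 10^10
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(max_gap > 0.1)  # the perturbation has grown to order 1
```

Since the error roughly doubles per step, even a perfect model of the
individual elements can't save a simulation whose inputs are measured with
finite precision.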

~~~
sumitviii
I may be wrong, but weather cannot be simulated because the real version is
running over the entire globe. The brain, while hard to simulate with present
technology, is still a three-pound thing. The computational-resources obstacle
will be overcome.

------
theVirginian
The Turing Test is a thought experiment, not an actual test. No machine will
ever "pass" it, because treating it as a pass/fail benchmark is fundamentally
a misunderstanding of the Turing Test.

~~~
DSMan195276
Thank you for pointing this out. It bugs me to no end when people talk about
'passing the Turing Test'; it shows a huge lack of understanding of what
Turing was getting at.

The important part of the Turing Test isn't whether or not we can build a
computer to 'pass it'; it's what differentiates us from complex computers
that can spit out the right answer when asked, and whether there actually is
any difference. The question really is: "If a computer can act exactly like a
human, to the point where people can't tell the two apart, what exactly is the
difference between that computer and a real human?" Most people would say that
the computer can't "think" and a human can, but if the computer can shift bits
around in such a way that it comes to the 'right' response, what's the
difference between that and 'thinking'?

~~~
newman8r
A fallacy is equating the ability to think, alone, with our human potential.
It's not just thinking that has built human civilization. Coming up with the
correct answer to any query is one thing; evolving from a single cell to
manipulate our environment, having consciousness spontaneously emerge, and
then within a few thousand years discovering many of the secrets of the
universe, to the verge of becoming a god-like species (if we don't destroy an
entire planet first), is quite another.

When the machines can do that, then we can compare apples to apples.

Thinking is a nice feat and I think it's going to be solved in many of our
lifetimes. It's definitely something I'd like to research more at some point.

WE are the terminators - so... yeah.

~~~
icebraining
_A fallacy is equating the ability to think alone with our human potential._

But who equated them?

~~~
newman8r
Laypeople in general. I think that's part of the fallacy: the idea that the
Turing Test is some recognized standard for when computers are smarter than
humans. But as others have stated, it works a lot better as a thought
experiment than as something to directly pursue.

------
ForHackernews
They made this bet in 2002, so we're almost halfway to 2029. Does anyone
(other than Kurzweil) seriously think a Turing Test-passing machine is just
over the horizon?

(And no, contrived scenarios with computers pretending to be foreign children
don't count[0])

[0] [http://blogs.wsj.com/digits/2014/06/10/did-eugene-
goostman-p...](http://blogs.wsj.com/digits/2014/06/10/did-eugene-goostman-
pass-the-turing-test/)

~~~
modeless
I think there's an excellent chance that deep learning research will lead to a
machine that can pass the Turing test in the not too distant future. I can't
say if it will be within exactly 14 years or not, but if you've been following
the latest developments in deep learning, the path to get there is much more
clear today than it was even five years ago.

~~~
quonn
I think you are greatly overestimating what deep learning can do. In the 90s,
we could recognise digits accurately. Now we can do the same with traffic
signs even in bad weather etc. That is exactly the kind of progress that we
have made in 14 years. And let's not forget this is a manually tuned algorithm
for a particular problem.

It's great progress, but it is also a far cry from what humans can do and
there is no clear path at all to get there - currently.

~~~
sushirain
The recent advances using RNNs are much more than sign recognition: they
cover language generation, control and reinforcement learning, and attention.

------
restalis
The more I think about the Turing Test, the more flawed (or hackable) it
seems. What if, instead of improving the computer, I go the other way around
and put a human with an atypical mode of communication (someone on the autism
spectrum, say) behind the curtain? Such a person may exhibit a very unusual
model of thought and make the computer harder to identify. If such a hack
were prevented by letting the judge choose the human subject for the test,
say someone they know to some degree, then the test becomes more a challenge
of recognizing the quirks that one particular person has, in relation not
only to computers but to other humans as well!

------
dnautics
The Turing Test must have an adversarial component. E.g., for any competition
with X entrants, the computer candidate must be compared against _one of the
other entrants' human team members_, randomly selected. If the other entrant
is (correctly) identified as the human, a fraction of the year's prize, say
1/X, goes to the adversarial team, and the candidate is barred from winning
that year.
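
The payout rule above can be sketched in a few lines. Everything here (the
function name, the prize figure) is illustrative, not from any real
competition:

```python
# Sketch of the adversarial payout rule described above.
# The prize figure and names are made up for illustration.
def adversarial_round(prize, num_entrants, judge_spotted_human):
    """If the judge correctly identifies the human opponent,
    that opponent's team earns prize/num_entrants and the
    machine candidate is barred from winning this year."""
    payout = prize / num_entrants if judge_spotted_human else 0.0
    candidate_barred = judge_spotted_human
    return payout, candidate_barred

print(adversarial_round(100_000, 10, True))   # (10000.0, True)
print(adversarial_round(100_000, 10, False))  # (0.0, False)
```

The 1/X split gives human entrants a direct financial incentive to be
convincingly human, which is what makes the test adversarial rather than a
fixed benchmark.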

------
ilaksh
If you change it to a five-minute interview instead of two hours, then less
than five years.

Anyway, it will happen pretty soon, and then people will just say it wasn't a
good test.

------
sushirain
If a computer can convince judges that it is human, then it can also convince
judges that it is sentient. If it is convincingly sentient, is it moral to
program it?

------
halviti
Isn't anyone else curious about this "Long Now Foundation"?

I mean, for one thing, the bet is just as much about the Turing Test as it is
about whether you believe this foundation will still exist in 2029.

It also seems like they take all the bet money and then invest it while
they're waiting to pay out. Seems like a pretty sweet deal.

~~~
wpietri
Hi! I'm a Long Now member, and a dozen years ago I was the person who wrote
the Long Bets code.

We actually host some related bets and predictions, including:

"The original URL for this prediction will no longer be available in eleven
years." [http://longbets.org/601/](http://longbets.org/601/)

"The Long Bets Foundation will no longer exist in 2104."
[http://longbets.org/137/](http://longbets.org/137/)

Investing the money is definitely part of what makes this interesting. Thanks
to compound interest, some truly large sums could be at stake by the time the
bet is resolved. (We also couldn't do it any other way, as a number of the
bettors are unlikely to be around when the bets are resolved.) The money,
though, is not held directly by the Long Now. It's in a special account set up
with The Farsight Fund of Capital Research and Management Company. That's
mentioned here: [https://longbets.org/about/](https://longbets.org/about/)
[http://longbets.org/faq/](http://longbets.org/faq/)
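
For a feel of how much compounding matters on these time scales, here's a
back-of-the-envelope sketch; the $20,000 stake and the 5% annual return are
hypothetical figures for illustration, not the fund's actual numbers:

```python
# Hypothetical numbers: a $20,000 stake compounding at 5% a year
# over the 27 years from 2002 to 2029. Neither figure is taken
# from the actual Long Bets account.
stake, annual_rate = 20_000.0, 0.05
years = 2029 - 2002  # 27 years
final_value = stake * (1 + annual_rate) ** years
print(round(final_value))  # roughly 3.7x the original stake
```

Even modest returns compounded over a quarter-century multiply the stake
severalfold, which is why the escrow arrangement matters for very long bets.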

As to whether it'll exist in 2029, I'd say the odds are good. The Turing bet
is a 27-year bet and we're already nearly halfway there. But if you don't
think so, I will be entirely glad to bet against you. ;-)

~~~
wpietri
Oh, and I encourage everybody to come by The Interval:
[http://theinterval.org/](http://theinterval.org/)

The Long Now for years had a little museum space that got only a modest number
of visitors. But conversation about the long term is their goal, and they
realized that coffeehouses and bars are where a lot of good conversation
happens, so they converted the museum into a cafe during the day and a bar at
night.

If you're ever in San Francisco, it's a great nerdy tourist stop. It's in Fort
Mason, on the north edge of San Francisco between Fisherman's Wharf and the
Golden Gate Bridge.

------
onthefudge
Unless they know about genderless no form-factors.

~~~
SwellJoe
The only other search result for the phrase "genderless no form-factors" is an
HN comment from four days ago, also by a brand new user (not the same
username, but I'm guessing the same user). It is, as far as I can tell, a
nonsense phrase.

What are you trying to say, and can you say it in English?

------
ForHackernews
> or The Kurzweil Foundation if Kurzweil wins.

That modesty, though.

~~~
tga_d
Mitch Kapor is one of the founders of the EFF; both charities are self-serving
in that regard.

