
When Will Machines Exceed Human Performance? Survey of ML Researchers. - jtraffic
https://arxiv.org/abs/1705.08807
======
nopinsight
Why is there this incongruence?

Only 20% of respondents expect "Chance of global technological progress
dramatically increases after HLMI" to happen within 2 years after HLMI is
achieved, while 80% pick the other choice, "30 years after". (Table S4)

Here is the definition of HLMI from the survey: "High-level machine
intelligence (HLMI) is achieved when unaided machines can accomplish every
task better and more cheaply than human workers."

It appears to me that if machines or software, which can be replicated
billions of times in the span of two years, can do _every_ task better and
more cheaply than humans, it is akin to having 100+ times more active
researchers working on R&D, with much higher bandwidth of communication among
them than human researchers have.

It is true that we might be limited by computer hardware availability, but
given that the median time of HLMI predictions is 45 years from 2016, we are
unlikely to be limited by hardware then.

Another possibility is that most predictors believe progress will be limited
by the speed of physical experiments; my answer is that smart simulations
should allow HLMI to perform many experiments without waiting for their
real-world results. A recent paper from OpenAI has shown that learning in
simulations can be effectively transferred to solving real-world tasks
([https://blog.openai.com/robots-that-learn/](https://blog.openai.com/robots-that-learn/)).
In 45 years, the quality and scope of simulations will be far better than in
2016.

~~~
highd
It's sort of a weird definition for HLMI since it involves cost. I.e., if you
can replace an engineer, but leasing the hardware, power, etc. costs more than
the engineer's salary, it doesn't count?

This is an important point, because if you ignore cost, it could be the case
that the first HLMI requires significant computational power to run - and
definitely to train, I'd expect. So you could have a 5x-human agent on some
supercomputer, but you'd have to build 1,000 of those before you start talking
about significant impact.

------
mannykannot
Amusingly, the median response for 'AI researcher' is almost 40 years after
'all human tasks'. I am not sure that those being surveyed shared a common
understanding of what was being asked.

~~~
AnimalMuppet
And also almost 40 years after 'Math researcher'. The AI researchers do seem
to hold what they do in high regard...

~~~
esrauch
I mean, AI research seems like it will be the last to go by definition: as
long as there is anything else left to be taken over by AI, that is
justification that the AI researchers aren't done, right?

~~~
AnimalMuppet
But AI research goes when the AIs are capable of doing the AI research, not
when every last AI question has been researched. Why will that take nearly 40
years longer than math research?

~~~
tyingq
Deliberate poisoning of the training data.

~~~
AstralStorm
That cannot be filtered out by high level mathematical logic? Imaginary. ;)

------
chicob
Experts are known to be bad predictors of future outcomes in their own fields.
Many times these predictions obey the Gaes-Marreau law.

In the case of AI, according to one particular study, something similar
happens: expert predictions are contradictory, indistinguishable from both
non-expert predictions and past failed predictions.

[https://intelligence.org/files/PredictingAI.pdf](https://intelligence.org/files/PredictingAI.pdf)

~~~
rhaps0dy
I cannot find anything on Google about the "Gaes-Marreau Law" or "gaes-
marreau".

Could you please link me to something that explains it?

~~~
gwern
It's 'Maes–Garreau law'. Kelly made up a 'law' based on a few AI predictions
falling into a certain range; but you can see from the linked paper (which
uses like 20x more predictions) that it's not really that accurate a law and
there's more that goes into AI forecasts than just +X years. (For example,
China vs the West in OP, which is interesting and makes sense thinking about
it, but hadn't occurred to me.)

~~~
chicob
Yes, it's not actually a law, but a (perhaps cynical) heuristic.

Possible interpretations:

* People do not want to sound too optimistic, but they hope to see some improvements before they die;
* People hope they are dead before someone proves them wrong;
* People deal with temporal magnitudes on the order of the human lifespan and close multiples.

The last one, and then the first, are my guesses.

------
thomnific
I like this, but I feel it's a little optimistic (or pessimistic depending on
your view). Isn't asking ML researchers when AI will dominate human
performance a bit like asking a barber if you need a haircut?

~~~
aisofteng
Who would be better qualified to give an estimate?

~~~
tedsanders
One aspect of qualification is domain knowledge, which experts certainly have.
Another aspect of qualification is calibration, which can only be proved &
adjusted over time with a track record. A number of academic studies of
prediction markets and other forecasting systems have shown that well-
calibrated non-experts, with no skin in the game, often do better than actual
experts, who often have poor track records as a result of incentives (or
selection) to hype and extremize.[1]

Philip Tetlock has written on this topic for years. Two of his books are
Expert Political Judgment and Superforecasting.

[1]:
[https://en.wikipedia.org/wiki/The_Good_Judgment_Project](https://en.wikipedia.org/wiki/The_Good_Judgment_Project)

Edit: So to directly answer your question, rather than AI experts, I'd prefer
technology experts (AI or otherwise) with a track record of well-calibrated
predictions.
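
For the curious, the standard way such a track record gets scored in Tetlock's
forecasting tournaments is the Brier score (lower is better, 0 is perfect). A
minimal sketch, with made-up forecasts and outcomes purely for illustration:

    # Minimal Brier-score sketch; forecasts and outcomes are made up.
    forecasts = [0.9, 0.7, 0.2, 0.6]   # predicted probability each event happens
    outcomes  = [1,   1,   0,   0]     # what actually happened (1 = yes, 0 = no)

    # Mean squared difference between forecast and outcome
    brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
    print(f"Brier score: {brier:.3f}")  # 0.125 for these numbers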

------
briga
These studies are always interesting, but I don't think they have much more
scientific validity than, say, asking a bunch of religious fundamentalist
preachers when the second coming of Jesus is going to be. No one knows how
difficult it's going to be, and while we've overcome a lot of challenges, I'm
positive there are many more to overcome before we get to human-level
intelligence in computers. Whatever that means.

~~~
Houshalter
What would it mean for a prediction to have scientific validity? No one can
know the future, of course. But if you are going to try to make a prediction,
surveying a bunch of experts is probably the best strategy. If nothing else,
the wisdom-of-crowds phenomenon shows that the average of a bunch of wild
guesses is often surprisingly close to the correct answer.
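
As a toy illustration of that effect (the "true value" and noise level below
are arbitrary assumptions, and the guesses are assumed to be unbiased):

    # Toy wisdom-of-crowds sketch: individual guesses are noisy,
    # but their average lands close to the true value.
    import random

    random.seed(0)
    true_value = 45                                    # e.g. "years until HLMI"
    guesses = [random.gauss(true_value, 20) for _ in range(1000)]

    average = sum(guesses) / len(guesses)
    print(f"average guess: {average:.1f} (true value: {true_value})")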

~~~
briga
All I mean is that these predictions have to be taken with a grain of salt. I
feel like there's an inherent assumption here that the scientific and
political landscape will remain unchanged and progress will continue at its
current pace or faster. There are plenty of things that could prevent this
from being true -- nuclear war, economic/political collapse, natural disaster,
to name a few. I do hope they're accurate -- mainly because I think it would
be cool to see human-level AI in my lifetime -- but at the same time I remain
skeptical.

------
kmicklas
It's funny how surgeon is listed as the farthest-out application in the
abstract. I think surgery is in fact the easiest of all the listed jobs in an
AI sense, but it might depend more on advances in robotics.

------
jtraffic
I modified the title for accuracy. The original title misleads, slightly, IMO:
"When Will AI Exceed Human Performance? Evidence from AI Experts." I swapped
AI out and replaced it with ML.

The paper itself uses the acronym HLMI (high-level machine intelligence).
Quoting:

"High-level machine intelligence (HLMI) is achieved when unaided machines can
accomplish every task better and more cheaply than human workers."

So a collection of machines could accomplish HLMI, without needing any single
machine to do it alone.

~~~
Houshalter
I disagree with your modification. They are definitely talking about actual
AI. Computers that are actually generally intelligent and can do every task
humans can do.

~~~
vacri
The only bit of the book _Superintelligence_ that I liked was the historical
bit at the start, where it described the problem AI researchers have: as soon
as they achieve a milestone in AI, the goalposts shift and that milestone is
no longer considered an AI marker.

The simplest example was "can make a human believe they're talking to another
human", which was achieved with ELIZA. Before then, what ELIZA could do was
"AI". After, it was "just this thing, you know"...

------
tormeh
I think explaining your own actions in games is a weird thing to ask for. It
requires "aboutness" (that you're thinking about the problem). Aboutness is a
really inefficient way to handle problems, but it's handy because we can apply
it to all new situations, because we have general intelligence. Conversely,
when humans have trained hard at a task, they generally lose aboutness, like
an ANN. Things are done on instinct, feeling etc. In short, the NN has been
trained, and general intelligence is no longer required to do the task.
Indeed, it's been superseded.

More damningly for this kind of survey: Aboutness for a single task is not the
same as general intelligence. And it's general intelligence that we want.

~~~
AnimalMuppet
But it seems to me that general intelligence (or perhaps consciousness) is
"aboutness" for your own thinking.

------
return0
We should use deep learning to find out when

------
canada_dry
I agree we're still a few orders of magnitude behind on the myriad of
technologies that will enable Terminator-like robots... however, the
exponential progress we've been making in computer science (e.g. machine
learning) makes it quite plausible that these critical bits will be
discovered.

------
paulkrush
These people are not subject matter experts in these fields...

An interesting question would be to have them consider the location and the IP
environment. Will 10% of the public have their laundry folded by AI in the
east or west first? Will it be wrapped up in patents?

------
rayiner
If you asked aerospace engineers what they thought of the future in 1960 they
would've said we'd have Mars colonies and asteroid mining would've
revolutionized our economies.

~~~
1_2__3
I'm developing a theory that America's thoughts on what technology is capable
of swing wildly between two poles (possibly with every generation?): a strong
luddite streak that pooh-poohs technology, followed by a ridiculous fantasy
that technology can do everything. We're firmly entrenched in cycle "B" right
now; their Martian colonies are our true AI.

We think everything is just around the corner because so much has changed over
the last few decades, without realizing that those changes have only come in
certain areas, while the rest of the technical world is proceeding along at a
much slower, more methodical pace.

I saw an ELI5 post the other day from someone on Reddit asking what air
traffic controllers did that software couldn't do better. I actually had to
sit for a moment and ponder the person (almost certainly a youth, admittedly)
who thinks we're already at the point on the futurism curve where the task of
safely coordinating air (or any!) traffic is better done by a computer than a
person. They just couldn't wrap their heads around the idea that a group of
trained people, in 2017, with advanced software and visualization tools at
their disposal, might be better at that than a computer acting on its own.

The example fits elegantly because I do think AR is in our future (and our
present) and I'm absolutely thrilled about what it's going to bring to the
world. But the idea that we're going to replace (waste) the meat computer in
our heads - let alone that we _can_ - within the next few... What, years?
Decades?

Anything in that timeline seems ridiculous to me, and not because I can't
imagine such an incredible future, but because I know how the technology
works and how far away it is from replacing (not augmenting, which again is
today and I think has a rich future ahead of it) human brains and senses.
Yes, automation is going to replace all our jobs, and also the sun is going to
burn up someday. We need to prepare for both - arguably the former more
adroitly than the latter - at a pace that makes sense both for humanity today
and humanity in the future.

------
BatFastard
Doesn't machine performance already exceed human performance in a number of
areas? As was just demonstrated this week when AI beat the world's best Go
player?

~~~
throwawaymsft
From the paper:

> Defeat the best Go players, training only on as many games as the best Go
> players have played. For reference, DeepMind’s AlphaGo has probably played a
> hundred million games of self-play, while Lee Sedol has probably played
> 50,000 games in his life.

~~~
6nf
Seems a bit unfair. Go players study games of other players too; they don't
just rely on personal experience.

~~~
majewsky
But it's still much less than 100 million games. If I assume that playing a Go
game (or looking at a record and understanding the moves taken) takes one hour
(which is a wild guess since I'm not familiar with the game), 100 million
games take nearly 11408 years.
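
A quick check of that back-of-envelope figure (still assuming one hour per
game, which is only a guess):

    # Back-of-envelope check: how long would 100 million games take a human?
    games = 100_000_000             # rough AlphaGo self-play count from the paper
    hours_per_year = 24 * 365.25

    years = games / hours_per_year  # one hour per game assumed
    print(f"{years:,.0f} years")    # ~11,408 years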

------
AnimalMuppet
11 or 12 years before a bipedal robot is human-equivalent in a 5K in a city?
I'm skeptical, if only for reasons of power supply.

~~~
Houshalter
Drones can travel pretty far. And they are a lot less energy efficient than a
walking robot, because they need to constantly expend energy to fight gravity.

~~~
AnimalMuppet
I just realized that I was assuming battery-operated. If you make it gasoline-
powered, then I can see it.

~~~
kpil
I can't imagine a more depressing future than one where the streets are filled
with two-stroke terminators running errands for rich people!

------
hprotagonist
We are, in general, really bad at predicting what is technologically
solvable in a given timespan.

People thought machine translation would be solved in 5 years back in the 60s,
too. I'm vastly more skeptical.

