
Stanford to host 100-year study on artificial intelligence - bra-ket
http://news.stanford.edu/news/2014/december/ai-century-study-121614.html
======
ccvannorman
Headline from 2115: Stanford AI100 study results suggest a strong AI
breakthrough a mere 15 years away

~~~
dave_sullivan
Indeed. On the flip side, "AI" is a moving target--people used to think
mathematical proofs, chess, and Jeopardy required "intelligence", but we now
know we just need software. I've heard people say "we've made no progress
towards AI", but I think that's as hyperbolic as saying "it's right around the
corner". Progress (in this case, using machine learning to build cool things)
isn't an on/off thing. There are worse fields of study to enter than machine
learning. You'll learn things that are broadly applicable. And if "real AI"
happens someday, it would be on par with the invention of flight or the moon
landing, both of which sounded batshit before someone (a bunch of people) did
it.

------
facepalm
Reminds me of a German legend I recently read to my child, about Till
Eulenspiegel. He traveled around in the Middle Ages and pranked people. Once a
couple of professors challenged him to teach a horse to read. He immediately
said "of course I can do it, but since horses are not very smart, it will
probably take 20 years". His thinking was that within 20 years the horse might
die, or the professors who challenged him might die, so he would be fine.

------
slg
Somewhat cynical question, but is the 100 year aspect of this study anything
beyond a marketing gimmick?

~~~
rguzman
Even if the study doesn't live as long as its name states, a clear mission is
useful because it guides people's decision making. A researcher in this should
be optimizing her efforts for the 100-year timescale and not for the ~5-year
one.

~~~
x1798DE
I don't see how that changes the individual researcher's incentive to
prioritize short-term, publishable discoveries.

~~~
dangerlibrary
One concrete example would be data acquisition. Long-term, large-sample panel
data sets take decades to produce, but are incredibly valuable if well
designed and executed.

If you are applying for funding as part of a 100 year study, you won't get
continued funding unless you put in the effort up front to design the data
acquisition correctly.

~~~
Someone
Another is what a project can offer your personnel. A four-year project will
offer almost nothing but temporary jobs.

A hundred-year project will be able to offer more permanent positions. That
creates less 'publish or perish' pressure. It may also attract scientists with
different character traits who wouldn't be able to compete in the rat race to
tenure.

------
nostromo
The study's final authors may not be human. As such, they will clearly be
biased.

~~~
ema
Well, humans aren't unbiased either.

------
drewda
Reminds me of a symposium I attended at Stanford in 2000 titled "Will
Spiritual Robots Replace Humanity by 2100?"[1][2].

(At the time, Bill Joy had written a provocative cover story for Wired
headlined "Why the Future Doesn't Need Us: Our most powerful 21st-century
technologies - robotics, genetic engineering, and nanotech - are threatening
to make humans an endangered species"[3] and this panel was assembled in
response.)

Glad to see that the backers of this new initiative agree with the more
nuanced folks on that panel, like John Holland--that these will continue to be
meaty questions to consider, both in computational and philosophical terms,
well past 2100.

[1]
[http://news.stanford.edu/news/2000/march29/robots-329.html](http://news.stanford.edu/news/2000/march29/robots-329.html)
[2]
[https://www.youtube.com/playlist?list=PLvW5zob1PPbbFUZK_LdzU...](https://www.youtube.com/playlist?list=PLvW5zob1PPbbFUZK_LdzU8iJBSFb2Zhyb)
[3]
[http://archive.wired.com/wired/archive/8.04/joy.html](http://archive.wired.com/wired/archive/8.04/joy.html)

------
bra-ket
[https://ai100.stanford.edu/](https://ai100.stanford.edu/)

------
stephengoodwin
The headline in 2040: 100-year Stanford AI study to add first Android
professor.

------
pcthrowaway
Someone behind this study clearly has little faith in the imminence of AGI.

~~~
chriswarbo
Well, it's been imminent since the summer of 1956 (
[http://en.wikipedia.org/wiki/Dartmouth_Conferences](http://en.wikipedia.org/wiki/Dartmouth_Conferences)
)

~~~
marcosdumay
Or, in other words, people have been expecting imminent AI for about half a
century now.

Never mind Moore's law putting computers more capable than our brains less
than 30 years away.
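The "less than 30 years" figure is the usual Moore's-law extrapolation. As a
back-of-envelope sketch (every number here is an assumption, not something
stated in the thread: the brain-equivalent ops/sec estimate is rough and
disputed, and the machine capability and doubling period are just illustrative):

```python
import math

# Back-of-envelope Moore's-law extrapolation.
# All three numbers below are assumptions, not facts from the thread.
brain_ops = 1e16        # rough, disputed estimate of brain-equivalent ops/sec
machine_ops = 1e13      # assumed ops/sec for a high-end machine circa 2014
doubling_years = 2.0    # classic Moore's-law doubling period

doublings = math.log2(brain_ops / machine_ops)  # ~10 doublings needed
years = doublings * doubling_years
print(round(years, 1))  # → 19.9
```

Under those assumed inputs the crossover lands comfortably inside 30 years,
which is the whole point of the objection that follows: the conclusion stands
or falls with the assumption that the doubling keeps going.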

~~~
gweinberg
Nobody thinks Moore's law will last another 30 years.

~~~
dudus
Nobody thought it would last this long, either.

~~~
jblow
See this graph:

[http://www.extremetech.com/wp-content/uploads/2013/08/CPU-Scaling.jpg](http://www.extremetech.com/wp-content/uploads/2013/08/CPU-Scaling.jpg)

The green line was the only one still going, and it plateaued about a year ago
(you'd see that in a newer graph).

~~~
dysfunction
The green line is the only one that Moore's law is about. Power consumption
plateauing is a _good_ thing, clock speed was never going to be the primary
driver of performance increases long-term, and instruction-level parallelism
(green line) is _not_ a measure of performance-per-clock-cycle.

------
melipone
It seems to me it's more about the effects of AI on people and society than
about AI itself.

------
atrilla
Related to the impact of AI on society, especially wrt rising unemployment: I
was reading the comments on this MIT Tech Review article
([http://www.technologyreview.com/news/533686/2014-in-computing-breakthroughs-in-artificial-intelligence/](http://www.technologyreview.com/news/533686/2014-in-computing-breakthroughs-in-artificial-intelligence/))
and I came across an opinion that gave me the shivers. Two points:

1) AI taking manual, workforce-based jobs. I can't help seeing how beneficial
the industrialisation of processes has been for humanity. Instead of relying
on inaccurate human judgement for manufacturing jobs, we let machines produce
perfectly uniform goods much better than we can. This has increased the
reliability of the outputs and lowered the prices of the products, which has
made them affordable to many more people. Jobs get more specialised, just like
the tools human beings have developed throughout history. Once more, survival
entails adaptation. And this is again a matter of supply and demand. In Spain,
where the economic crisis is still hitting the markets and employment, a
proper specialised education no longer guarantees landing a job (and it's not
because evil robots are doing the tasks of living scientists).

2) AI taking over engineers, lawyers, etc. AI is difficult per se. Nobody
comes up with a human replica made of metal by chance. Things take their time,
and improvements are gradual. That's a matter of fact. At present, AI (plus
machine learning, pattern recognition...) delivers a set of tools that let us
see farther, from the shoulders of giants. We have never been able to digest
the amount of data that we can digest nowadays. Isn't this progress? We
haven't yet created a creative machine, and I don't see it coming any time
soon.

I am so firmly convinced that AI has much good to do that I just created a
blog ([http://ai-maker.com/](http://ai-maker.com/)) solely dedicated to AI and
its applications, and I'm going to dedicate my spare time over the coming
years to growing this side project into something awesome, because that's
where AI is leading us.

~~~
edanm
It might interest you to join the LessWrong community, or read up on the work
of MIRI (the Machine Intelligence Research Institute,
[https://intelligence.org/](https://intelligence.org/)).

You can start by reading a few posts at the site: www.lesswrong.com.

------
sixQuarks
Either the singularity or humanity itself will destroy everything within 50
years. If we make it another 100 years, I'll be pretty surprised. Old, and
surprised, that is.

------
ajarmst
Damn it. I am, like, NEVER going to finish this dissertation.

------
vph
100 is a nice, round, even number. It's also a marketing ploy. The truth is,
none of these guys can make an accurate prediction about what AI will be able
to do in 20 years.

~~~
killerdhmo
I don't know the researchers in question, but Turing's paper [0] was
surprisingly predictive in 1950, decades before any of this came to pass.
It's not out of the question that they could set the direction of research for
a decade or two into the future.

I agree, 100 as a number is a marketing ploy. But I like the idea of a
sustainable long-term mission, rather than the funding rat race and trend
chasing you tend to see.

[0]
[http://www.abelard.org/turpap/turpap.php](http://www.abelard.org/turpap/turpap.php)

~~~
marcosdumay
> I believe that in about fifty years time it will be possible to programme
> computers with a storage capacity of about 10^9 to make them play the
> imitation game so well that an average interrogator will not have more than
> 70 per cent chance of making the right identification after five minutes of
> questioning.

He was mostly right, off by only some 25 years (50% over his estimate). A
really great estimate for something that changed so fast. But immediately
after:

> Nevertheless I believe that at the end of the century the use of words and
> general educated opinion will have altered so much that one will be able to
> speak of machines thinking without expecting to be contradicted.

That was quite off the mark.
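The "off by some 25 years" arithmetic above can be made explicit. (The 75-year
figure is an assumption about when Turing's 5-minute imitation-game bar was
arguably met, not a fact from the thread.)

```python
# Hedged arithmetic: the 2025 figure is an assumed pass date for Turing's
# 5-minute imitation-game bar, not something stated in the thread.
predicted = 1950 + 50            # Turing's "in about fifty years time"
assumed_actual = 1950 + 75       # if the bar was met roughly 75 years on
overshoot = assumed_actual - predicted           # years late
relative = overshoot / 50                        # fraction over the estimate
print(overshoot, relative)       # → 25 0.5
```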

Anyway, Turing's prediction was for 50 years into the future; that's orders of
magnitude easier than 100 years. And since nearly all of the predictions from
that era were completely wrong, what makes you think these people are the ones
of our time who'll get their predictions right?

~~~
Raphael
Playing chess against Stockfish, there's a setting for "thinking time".

------
samteeeee
Wintermute

