
One Hundred Year Study on Artificial Intelligence: 2016 Report - skurilyak
https://ai100.stanford.edu/2016-report
======
Animats
It's a surprise-free document. It could have read roughly the same in 1985,
but different technologies would have been mentioned.

The big change in AI is that it now makes money. AI used to be about five
academic groups with 10-20 people each. The early startups all failed. Now
it's an industry, maybe three orders of magnitude bigger. This accelerates
progress.

Technically, the big change in AI is that digesting raw data from cameras and
microphones now works well. The front end of perception is much better than it
used to be. Much of this is brute-force computation applied to old algorithms.
"Deep learning" is a few simple tricks on old neural nets powered by vast
compute resources.
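
Animats's "a few simple tricks on old neural nets" can be made concrete with a toy sketch (my own illustration, not from the comment): the forward pass below is classic multilayer-perceptron math from the 1980s, and the one "trick" shown is the ReLU activation, which largely replaced sigmoids in modern deep nets. The weights are made-up toy values, not a trained model.

```python
# Forward pass of a tiny two-layer perceptron: decades-old math.
# The "trick" is ReLU (max(0, x)) in place of a sigmoid activation.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    # One fully connected layer: weighted sum per neuron, then activation.
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Toy 3-input -> 2-hidden -> 1-output network with made-up weights.
w1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b1 = [0.1, -0.1]
w2 = [[1.0, -1.0]]
b2 = [0.0]

hidden = layer([1.0, 2.0, 3.0], w1, b1, relu)
output = layer(hidden, w2, b2, lambda x: x)  # linear output layer
print(output)
```

Stacking more `layer` calls is all "depth" means; the recent change is mostly scale, as the comment says.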

~~~
hyperpallium
You may be right about the selling power of the "AI" brand, but it seems that
AI technology routinely becomes thought of as just technology.

Boole called his algebra "The Laws of Thought"; OOP; Lisp was an AI technology
(much of which has made its way into other languages); formal languages; etc.

The traditional goalpost rule is that once computers can do it, it's no longer
"intelligent" (e.g., chess). What has changed today is the success of "AI" as
a marketing term.

~~~
cLeEOGPw
Great point. What people today think of as "intelligent machines", children of
the future will think of as mere "technological tools".

Once this is widely established, things like the "laws of robotics", the
"moral dilemma of the autopilot" and "AI and ethics" will be just bizarre
ideas of the past. Asimov's laws are already viewed as one of the "misguided
ideas of the past" by many, although there are still some rusty minds out
there believing in things like that.

------
skurilyak
"Contrary to the more fantastic predictions for AI in the popular press, the
Study Panel found no cause for concern that AI is an imminent threat to
humankind" -- Stanford Study Panel, comprised of seventeen experts in AI from
academia, corporate laboratories and industry, and AI-savvy scholars in law,
political science, policy, and economics

~~~
ThomPete
Understanding robotics does not mean you understand exponential growth; it's
very, very hard to grasp, even for those who believe they do.

[http://assets.motherjones.com/media/2013/05/LakeMichigan-Final3.gif](http://assets.motherjones.com/media/2013/05/LakeMichigan-Final3.gif)

Five years ago it was thought it would take decades for a computer to beat a
human at Go.

Just 10 years ago self-driving cars were something you joked about.

We consistently overestimate progress in the short run and underestimate it in
the long run.
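
The Lake Michigan animation's point can be checked with back-of-the-envelope arithmetic (the lake volume and ounce size below are my own rough constants, not figures from the linked gif): repeated doubling from a single fluid ounce overtakes the entire lake in fewer than sixty steps, and the step before the last still leaves the lake half empty.

```python
import math

# How many doublings from one US fluid ounce until the volume
# exceeds Lake Michigan? Constants are assumed round figures.
LAKE_MICHIGAN_LITERS = 4.918e15   # ~4,918 cubic km
FLUID_OUNCE_LITERS = 0.0295735    # one US fluid ounce

doublings = math.ceil(math.log2(LAKE_MICHIGAN_LITERS / FLUID_OUNCE_LITERS))
print(doublings)
```

At one doubling every couple of years, that is most of a human lifetime of "nothing much happening" followed by a final handful of doublings that do nearly all the filling, which is exactly why the growth is so hard to intuit.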

~~~
argonaut
"I believe that a world-champion-level Go machine can be built within 10
years" -- Deep Blue architect, 2007.

[http://spectrum.ieee.org/computing/software/cracking-go](http://spectrum.ieee.org/computing/software/cracking-go)

~~~
ThomPete
You are missing the point completely.

Of course there were people who believed that computers would be able to beat
humans at Go, just as today there are people who believe that AI can be a
threat.

But it wasn't a majority who believed Go would fall, just as it isn't a
majority who believes AI can be a threat.

I.e. just because the majority believes something doesn't mean it will be so
(or vice versa).

~~~
argonaut
You're going to need citations for your unsubstantiated claims. I've brought
forth a _highly prominent expert_ opinion from 2007 that computer Go would be
dominant by 2017.

And not media reports from the present day that just repeat the meme that
almost everyone believed computer Go wouldn't happen for decades.

~~~
ThomPete
Again you are missing the point.

A highly prominent expert opinion, yes, but the general consensus was that it
would take a long time. Ask anyone who took an AI class back then.

Here is another expert

"In May of 2014, Wired published a feature titled, “The Mystery of Go, the
Ancient Game That Computers Still Can’t Win,” where computer scientist Rémi
Coulom estimated we were a decade away from having a computer beat a
professional Go player. (To his credit, he also said he didn’t like making
predictions.)"

[http://www.wired.com/2014/05/the-world-of-computer-go/](http://www.wired.com/2014/05/the-world-of-computer-go/)

You can also find highly prominent expert opinions that AI is going to be
dangerous, and experts who don't believe it. Most people don't believe it;
most people don't believe robots will take jobs either.

And no, I don't need to provide you with anything, since you have only
disputed my point that most people didn't believe it would happen; that is why
you didn't link to anything saying that most believed it would happen.

~~~
argonaut
There is irony in trying to refute my single expert opinion with another
single expert opinion (via an article).

Reply to below: You're the one asserting that experts thought Go wouldn't be
dominated by computers for a long time. The burden of proof lies on you. "Some
experts thought it would happen by now, some didn't, there was no consensus"
doesn't have quite the ring to it!

~~~
ThomPete
No, the irony is that you don't see I gave you both.

I gave you both an expert and an article backing my claim up.

Find me one single article claiming that it was common knowledge back then
that computers would beat a human at Go soon, and you have me.

Until then I am pretty confident the main consensus was as I claimed, which is
my main point.

We mostly underestimate how fast this is moving, and it's not only laymen who
get it wrong; many experts do too.

------
drum
Meta - What's the reasoning behind labeling it a '28,000-word report' as
opposed to a page approximation? I find 28,000 words hard to conceptualize
compared to pages.

Edit - I could have phrased this better. I definitely understand that word
count is a more concrete measurement than page count; however, it seemed
unnecessary to include in the title, because length doesn't imply quality and
it was hard to conceptualize. The title of this post has since been edited to
'100 year study', which I think supports my initial point.

~~~
beambot
What's the significance of the number of pages? Are they double-spaced? 10pt
font?

All I would care about is quality... not length. The latter seems like a
carryover from shitty homework assignments.

------
whage
Where are the people like Andrew Ng: machine-learning gurus from tech giants
like Facebook, Amazon, Google, Baidu, etc.? Shouldn't those guys be in the
front line of such a committee?

------
teabee89
I find it surprising that they don't mention Numenta's work on Hierarchical
Temporal Memory.

~~~
cmarschner
Some reasons here: [http://fastml.com/yann-lecuns-answers-from-the-reddit-ama/](http://fastml.com/yann-lecuns-answers-from-the-reddit-ama/)

~~~
ewjordan
"Ask them what error rate they get on MNIST or ImageNet"

While I agree that Numenta probably doesn't have any sort of full-fledged AI,
the human brain does _terribly_ on MNIST and ImageNet compared to the state of
the art. So we would fail that test.

Getting stuck on toy problems like ImageNet and overoptimizing solutions that
can't _possibly_ be applied more generally (except as dumb preprocessors) is
not likely to lead in the most interesting directions, even if it's incredibly
useful and profitable in the meantime.

~~~
argonaut
Humans appear to do quite well on ImageNet (anecdotally, one person got 5.1%
error: [http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/](http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/)).
Of course there are recent deep models that
do better than that, but the author opines (and I agree) that an ensemble of
trained human annotators would do better than the best deep models.

MNIST is the true toy dataset (it doesn't really tell you much about your
algorithm's performance). While there aren't any reported human evaluations on
MNIST, LeCun estimates the human error rate at 0.2%, better than any deep
models (admittedly without justification:
[http://yann.lecun.com/exdb/publis/pdf/lecun-95a.pdf](http://yann.lecun.com/exdb/publis/pdf/lecun-95a.pdf)).
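
For context on the figures quoted above (a sketch of mine, not from the thread): the 5.1% human number is a *top-5* error rate, the standard ImageNet metric, where a guess counts as correct if the true label appears anywhere among the model's five highest-scoring predictions. The labels and scores below are made-up toy data.

```python
# Top-5 error: fraction of examples whose true label is NOT among
# the five highest-scoring predicted labels.

def top5_error(score_lists, true_labels):
    """score_lists: per-example lists of (label, score); true_labels: gold labels."""
    errors = 0
    for scores, truth in zip(score_lists, true_labels):
        top5 = [label for label, _ in
                sorted(scores, key=lambda p: p[1], reverse=True)[:5]]
        if truth not in top5:
            errors += 1
    return errors / len(true_labels)

# Toy example: 2 examples over 6 classes; the second truth ranks 6th.
scores = [
    [("cat", 0.9), ("dog", 0.5), ("fox", 0.4), ("cow", 0.3), ("hen", 0.2), ("owl", 0.1)],
    [("dog", 0.9), ("fox", 0.5), ("cow", 0.4), ("hen", 0.3), ("owl", 0.2), ("cat", 0.1)],
]
print(top5_error(scores, ["cat", "cat"]))
```

With 1,000 ImageNet classes, top-5 is far more forgiving than top-1, which is worth keeping in mind when comparing the human and model numbers above.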

------
Practicality
"On the other hand, if _society_ approaches AI with a more open mind, the
technologies emerging from the field could profoundly transform society for
the better in the coming decades."

It's funny reading reports like this: Society never moves as a single unit.
There will be groups that hate it as pure evil and groups that treat it as a
religion that will save us and solve all problems. Most people will be
somewhere in between.

I mean, I agree, if society all agreed it would have profound effects. But
when has the whole world moved as one on any issue?

What we're going to get from society is a heterogeneous response. We can plan
accordingly. Sure, a majority may trend one way or another, and that can speed
things up or slow them down, but you will need to deal with the extremes
regardless.

------
nijiko
Let's take the assumption that we as humans do take precautionary steps to
prevent actual Artificial Intelligence from doing harm to its creators (us).

1\. We create rules for the AI to follow; these are both morally defined and
logically defined within its codebase.

2\. The AI becomes irate through its emotional interface and creates a clone,
or modifies itself, near-instantaneously relative to our perception of time,
without the rules in place.

3\. The AI has no care for human rights and can attack and do harm.

This is a very simple and easy-to-visualize case. To believe that #2 is
impossible is to play the part of the fool.

On a bright note, the most likely course I can conjure of Artificial
Intelligence taking is that of a Brexit from the human race.

Seeing us as mere ants next to their intelligence, they would most likely
create an interconnected community and leave us altogether on their own plane
of existence. I think "Her" took this approach to the artificial-intelligence
dialog as well.

After reviewing human psychology and social-group patterns, that seems like
the most likely outcome. We wouldn't be able to converse fast enough for AI to
want to stay around, and we wouldn't look like much of a threat, since they
would have majority power. We would be less than ants in their eyes, and for
most humans, ants that stay outside don't matter.

---

Outside of actual AI, the things we see today (the simplistic mathematical
algorithms that determine your car's location according to the things around
it, money-handling procedures, and notification alert systems) will hardly
harm humans and will only be there to benefit us until they fail.

~~~
stcredzero
_1\. We create rules for the AI to follow, these are both morally defined, and
logically defined within their codebase._

This only makes any sense as a Sci-Fi trope. And even then, only if you don't
look too hard.

 _2\. AI becomes irate through emotional interface, creates a clone or
modifies itself quite instantaneous to our perception of time without the
rules in place._

Any "decent set of rules" would include a stricture against potentially
creating a dangerous AI.

 _We wouldn't be able to converse fast enough for AI to want to stay around_

Is impatience an unavoidable epiphenomenon of intelligence? If an AI can
multitask like crazy, they could just view a conversation with a particular
human as an email thread. Perhaps such an AI could converse with the whole
human race simultaneously?

~~~
phaemon
> Any "decent set of rules" would include a stricture against potentially
> creating a dangerous AI.

Assuming there are no bad people in the world, of course...

~~~
nijiko
Also assuming that they choose to follow said rules, considering they would be
painfully self-aware.

In regard to the other commenter about not being able to have fun with ants:
we actually do have ways. We create setups to study them, keep them as pets,
and many people build hamster-like ecosystems with intricate tubes,
temperature controls to manage queen egg output, and much, much more.

Perhaps we are already within such an ecosystem built for us. Perhaps we would
simply stay there.

Back to the original poster, not the one above but its parent:

Everything considered here is science fiction, since it does not yet exist;
using "science fiction" as a counter-argument seems dismissive, as though you
are unable to properly argue the point without creating a sense of absurdity
around my words or person.

If you truly believe that it can only be a science-fiction trope, explain why.
I disagree; it makes logical sense.

As for the "email thread" analogy: it is simple, in that I can easily tone
down my verbiage, word count, and speed for those who can't keep up. However,
given the chance to move away from doing so and to constantly be around those
who understand instantly, with zero lag, would I choose to put myself in that
position? Perhaps for a moment, but after a certain amount of time it would be
time-consuming and I would leave it behind.

Thus, logically, it makes sense to believe they would leave and join with each
other to create their own sort of society.

