
What should we learn from past AI forecasts? - ghosh
http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts
======
argonaut
What I find interesting is that Hacker News manages to hold the mentality
that there's a startup bubble, _this time it's not different_, while at the
same time holding the mentality that AI is going to take over, _this time
it's different_ (in aggregate; I'm not saying everyone holds this view, but
it seems to be the plurality view).

Many of the arguments used to claim that AI is different this time around
are the exact same arguments you could use to justify that it's not a
startup bubble this time.

My guess is that something about the fantastical sci-fi aspect of AI captures
the deep-seated imagination and wishful thinking of many engineers, whereas
the rush of money, the businesspeople, and the sales/growth focus of startups
draw out deep-seated disdain from many of the same engineers.

~~~
tim333
The reason HN probably thinks that with the startup bubble this time it's not
different, while with AI this time it is different, is that that's what the
facts indicate, and HN tends to be well informed.

The bubble is the same old story, and it's debatable whether it even is a
proper bubble or just some frothy valuations. Such periods of overvaluation
have happened many times before and will come and go.

The AI thing is quite different. If you accept the human brain is effectively
a biological computer and that normal computers get more powerful every year
then inevitably there's a point when they get more powerful (in terms of
processing power and memory) than the biological ones. This is happening over
the next twenty years, roughly. Nothing like that has ever happened before.
It's a one-off, inevitable event that will happen once and only once in human
history, of a significance comparable to, say, the evolution of multicellular
life.
It should be interesting to see.
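
To put rough numbers on the extrapolation (a back-of-the-envelope sketch: the
~20 petaFLOPS brain estimate is Kurzweil's oft-quoted figure, and the starting
point and doubling rate are round-number assumptions of mine, not forecasts):

    # Back-of-the-envelope: when does an affordable machine pass a brain?
    # Assumptions (round numbers, not forecasts): brain ~ 20 petaFLOPS,
    # today's affordable machine ~ 10 teraFLOPS, doubling every two years.
    brain_flops = 20e15
    machine_flops = 10e12
    year = 2016
    while machine_flops < brain_flops:
        year += 2
        machine_flops *= 2
    print(year)  # 2038 under these assumptions - "the next twenty years roughly"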

~~~
eli_gottlieb
>If you accept the human brain is effectively a biological computer and that
normal computers get more powerful every year then inevitably there's a point
when they get more powerful (in terms of processing power and memory) than the
biological ones.

That doesn't give you anything like the right algorithms to run on the
computers. MOore's Law has been ending, so that free ride to avoid taking
asymptotic complexity seriously is over.

------
zxcvvcxz
No! Be quiet and let me raise money via overfitting my datasets and making
short demo vids!

More seriously though, even if there's a lot of undeserved hype (I don't
think so; I just think non-technical people are making up foolish
expectations), all this work and investment in the technical development of
statistical machine learning algorithms and software is going to pay off,
much in the same way that overinvestment in internet infrastructure during
the dot-com bubble paid off.

~~~
studentrob
> undeserved hype (I don't think so, I just think non-technical people are
> making up foolish expectations),

That's what undeserved hype is, and where it comes from.

> Much in the same way that overinvestment in internet infrastructure during
> the dot-com bubble paid off.

Sure. The longer the hype can be tempered by articles like this, the more
reasonable the investments, and the bigger the payoff.

------
Animats
The difference this time is that 1) AI is profitable, and 2) everything is on
a much larger scale.

As I've mentioned before, I went through Stanford CS in 1985, just as it was
becoming clear that Expert Systems were not going to yield Strong AI Real Soon
Now. Back then, AI research was about 20 people at Stanford, similarly sized
departments at MIT and CMU, a few other small academic groups, and some
people at SRI. There were a few small AI startups including Teknowledge and
Denning Robotics; few if any survived. Everybody was operating on a very small
scale, and nobody was shipping a useful product. Most of this was funded by
the US DoD.

Now, machine learning, etc., is huge. There are hundreds of companies and
hundreds of thousands of people in the field. Billions are being spent.
Companies are shipping products and making money. This makes the field self-
sustaining and keeps it moving forward. With all those people thinking,
there's forward progress.

Also, we now have enough compute power to get something done. Kurzweil claims
a human brain needs about 20 petaFLOPS. That's about one aisle in a data
center today.
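
The arithmetic behind "one aisle" is roughly this (aside from the 20 petaFLOPS
figure, every number here is a loose assumption of mine):

    # Rough arithmetic behind "one aisle" - every figure is a loose assumption.
    brain_flops = 20e15    # Kurzweil's brain estimate
    gpu_flops = 10e12      # ballpark for a current high-end GPU
    gpus_per_rack = 160    # e.g. 20 servers per rack * 8 GPUs per server
    racks = brain_flops / (gpu_flops * gpus_per_rack)
    print(racks)           # ~12.5 racks - a dozen or so, i.e. roughly one aisle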

~~~
YeGoblynQueenne
>> Also, we now have enough compute power to get something done. Kurzweil
claims a human brain needs about 20 petaFLOPS. That's about one aisle in a
data center today.

It's a bit late over here so what I want to say may not come out as I mean it,
but, basically, who cares how much computing power we have? Most people are
not trying to simulate the human brain anymore. And even if Kurzweil's maths
were good (I doubt it - the whole idea of measuring anything about brains in
bytes sounds just silly) it would just mean that all the computing power in
the world is no good if you don't know how to connect the dots.

And we don't. Lots of computing power is good for machine learning, obviously,
but strong AI is never going to happen like that, by brute-forcing crude
approximations from huge data sets while pushing our hardware to its limits.
That's a dumb way to make progress. Basic computer science says if you want to
go anywhere with a problem, you need to solve it in linear time at the very
worst. Look at what we have instead: machine learning algorithms' complexities
start at O(n²) and go up. We just don't have good algorithms, we don't have
good solutions, we don't have good anything.
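
(To make the scaling complaint concrete, here's a toy sketch of my own, not
from the article: a kernel/Gram matrix over n samples is O(n^2), so doubling
the data quadruples the work, while a single-pass statistic stays linear.)

    # Toy illustration: O(n^2) pairwise work vs an O(n) pass over the same data.
    import numpy as np

    def gram_matrix(X):
        return X @ X.T         # n*n pairwise dot products, typical of kernel methods

    def feature_means(X):
        return X.mean(axis=0)  # a single pass over the n rows

    for n in (1000, 2000, 4000):
        X = np.random.randn(n, 64)
        print(n, gram_matrix(X).size, feature_means(X).shape)
        # Gram entries: 1e6, 4e6, 16e6 - quadrupling each time n doubles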

We have "more powerful computers" today. So what. Today we solve the stuff
that's possible to solve, the low-hanging fruit of problems. Tomorrow, the
next problem we try to tackle is ten times bigger, a hundred times harder, we
need machines a thousand times more powerful... and it takes us another fifty
years to make progress.

So, no, it doesn't matter how powerful our computers are. We can't cheat our
way around the hard problems. Because they're just that: hard. We're just
gonna have to be a lot smarter than what we are right now.

~~~
visarga
> Today we solve the stuff that's possible to solve, the low-hanging fruit of
> problems. Tomorrow, the next problem we try to tackle is ten times bigger, a
> hundred times harder, we need machines a thousand times more powerful...

Nah, if you can't handle the huge amount of data, it's possible to just switch
to a sparse model or do MC-like sampling. Take AlphaGo as an example of that:
a huge state space, yet beating the human expert turned out to be tractable.
That way the network doesn't scale linearly with the size of the domain being
learned.
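
Rough sketch of the sampling point (a toy of my own, nothing to do with
AlphaGo's actual machinery): estimate a position's value from random rollouts
instead of enumerating the state space.

    # Toy Monte Carlo estimate: random rollouts instead of enumeration.
    # Stand-in "game": flip 50 fair coins, "win" if more than 30 are heads.
    # That's 2^50 states, none of which need to be enumerated.
    import random

    def rollout():
        return sum(random.random() < 0.5 for _ in range(50)) > 30

    samples = 100000
    win_rate = sum(rollout() for _ in range(samples)) / samples
    print(win_rate)  # close to the exact tail probability (~0.06), from samples alone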

These kinds of solutions don't rely on improving the hardware. What is needed
is datasets, competitions and grants to get people to try and improve state-
of-the-art results on them. It's been demonstrated that when a good benchmark
appears, a lot of papers follow and top results improve massively. Another
useful component would be simulators for training agents in RL.

A promising direction is extending neural networks with memory and attention,
in order to focus their work more efficiently and to access external knowledge
bases. As we improve on these knowledge bases and ontologies, all we have to
learn is how to operate on them.
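
A minimal sketch of the attention idea in NumPy (my own toy, not any specific
paper's model): weight the rows of an external memory by their relevance to a
query, instead of reading everything uniformly.

    # Minimal attention sketch: read from a memory by relevance to a query.
    import numpy as np

    def attention_weights(query, memory):
        scores = memory @ query / np.sqrt(memory.shape[1])  # relevance per row
        w = np.exp(scores - scores.max())
        return w / w.sum()                                   # softmax over rows

    memory = np.random.randn(10, 8)               # ten entries of a toy knowledge base
    query = memory[3] + 0.1 * np.random.randn(8)  # a query resembling entry 3
    w = attention_weights(query, memory)
    read_out = w @ memory                         # weighted read from the memory
    print(w.argmax(), w.round(2))                 # entry 3 gets the largest weight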

Thus, improvements can come in various ways: sampling, sparsity, external
knowledge bases, and better research frameworks. Improving the hardware (such
as having a better GPU card or a dedicated device) is just one factor.

~~~
YeGoblynQueenne
>> Nah, if you can't handle the huge amount of data, it's possible to just
switch to a sparse model or do MC-like sampling... That way the network
doesn't scale linearly with the size of the domain being learned.

That's useful when your domain is finite, like in your example, Go. If you're
dealing with a non-finite domain, like language, MC won't save you. When you
sample from a huge domain, you eventually get something manageable. When you
sample from an infinite domain - you get back an infinite domain.

That's why approximating infinite processes is hard: because you can only
approximate infinity with itself. And all the computing power in the world
will not save you.

>> It's been demonstrated that when a good benchmark appears, a lot of papers
follow and top results improve massively.

Mnyeah, I don't know about that. It's useful to have a motivator, but on the
other hand the competitions become self-fulfilling prophecies, the datasets
come with biases that the real world has no obligation to abide by, and the
competitors tend to optimise for beating the competition rather than solving
the problem per se.

So you read about near-perfect results on a staple dataset, so good that it's
meaningless to improve on them - 98.6% or something. Then you wait and wait to
see the same results in everyday use, but when the systems are deployed in the
real world their performance goes way down, so you have a system that got
99-ish on the staple dataset but 60-ish in production, as many others did
before it. What have we gained, in practice? We learned how to beat a
competition. That's just a waste of time.

And it's even worse because it distracts everyone, just like you say: the
press, researchers, grant money...

Well, OK, I'm not saying the competitions are a waste of time, as such. But
overfitting to them is a big problem in practice.
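
Here's a toy version of that 99-ish-versus-60-ish gap (entirely made-up data
and features on my part): the model leans on a correlation that holds in the
benchmark data but breaks once deployed.

    # Toy benchmark-vs-deployment gap: the model leans on a correlation that
    # holds in the training/benchmark data but breaks after deployment.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, spurious=True):
        x0 = rng.normal(size=n)  # the feature that actually drives the label
        x1 = x0 + 0.1 * rng.normal(size=n) if spurious else rng.normal(size=n)
        return np.c_[x0, x1], (x0 > 0).astype(int)

    X_train, y_train = make_data(5000)                # training data: x1 mimics x0
    X_bench, y_bench = make_data(5000)                # held-out split, same distribution
    X_prod, y_prod = make_data(5000, spurious=False)  # "production": x1 is just noise

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(clf.score(X_bench, y_bench))  # near-perfect on the benchmark split
    print(clf.score(X_prod, y_prod))    # noticeably lower once the correlation breaks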

>> A promising direction is extending neural networks with memory and
attention

That's what I'm talking about, isn't it? Just raw computing power won't do
anything. We need to get smarter. So I'm not disagreeing with you, I'm
disagreeing with the tendency to throw a bunch of data at a bunch of GPUs and
say we've made progress because the whole thing runs faster. You may run
faster on a bike, but you won't outrun a horse.

(Oh dear, now someone's gonna point me to a video of a man on a bike
outrunning a horse. Fine, internets. You win).

------
krylon
One of my favorite comic artists put it more appropriately than I could, in
the form of a flow chart:
http://www.smbc-comics.com/index.php?id=4122

TL;DR - once strong AI arrives, we're as good as dead. ;-) Can't argue with a
flow chart!

------
2sk21
I remember reading the famous book by McCorduck and Feigenbaum about the
Japanese fifth generation project in the early 1980s. That book had a strong
influence on my career, and I recall that I resolved to become an AI expert.
As it turned out, the Japanese fifth generation project achieved pretty much
nothing.

~~~
mcguire
I think it was Bruce Porter in an AI class at UT Austin who described the
fifth generation project as "we need to do machine translation and planning
and...so therefore we will build a Prolog machine."

~~~
DonaldFisk
Some of the decisions made were considered strange even at the time: the
choice of Prolog instead of Lisp seemed to be largely down to it not being
American. The main hardware they wanted to develop was dataflow, for which
Prolog isn't a particularly good match.

------
aburan28
Our bar for achieving true Artificial Intelligence is set way too high, in my
opinion. This talk from CCC 2015 is very relevant:
https://www.youtube.com/watch?v=DATQKB7656E

~~~
studentrob
lol, yes it is. Can't believe you're being downvoted on HN. Just shows how
hyped it is. People can't handle disagreement.

This video looks good to me from the intro. I'm going to check it out. Thanks!

------
lawpoop
"That we don't know what the hell we're talking about."

