
The end of AI winter? - jacquesm
http://machineslikeus.com/the-end-of-AI-winter.html
======
WilliamLP
> machine translation, data mining, industrial robotics, logistics, speech
> recognition, banking software, medical diagnosis and Google's search engine,
> to name a few.

Do these things really directly descend from pure AI research? Or are they
really the result of a bunch of clever, yet extremely specialized algorithms,
independently developed, combined with an incredible increase in hardware
power? Not to say such things aren't incredible (Google clearly has changed
the world), but is it clear that pure AI research has paved the way?

From an outside perspective it seems the "God of the gaps" argument - AI is
what AI researchers haven't done yet - is used as a smokescreen to cover up
that AI research hasn't really done much in the last 30 years.
(Counterexamples without hand-waving?) And not only that, but it consistently,
wildly, and systematically makes incredible predictions that don't come true.
For example, Kurzweil is clearly a genius, but also wildly deluded and wrong
about his time-frames.

Some of you guys will down-mod me for saying this, and you're the same people
who won't admit my side was correct when, in 20 years, automated translation
tools still suck and you're still making the same old arguments about how AI
is what we haven't done yet... (but in ten years you'll be able to upload your
brain!)

~~~
lliiffee
> From an outside perspective it seems the "God of the gaps" argument - AI is
> what AI researchers haven't done yet - is used as a smokescreen to cover up
> that AI research hasn't really done much in the last 30 years.

Here is what you don't understand: There are very, very, very few people who
do research on "AI". People research medical diagnosis, or search, or data
mining, or speech recognition, or vision, or chess. If you go to a conference
on AI, these are the people you will meet. If you look at who the NSF is
funding with their "robust intelligence" area, these are the people you will
find. You can only say that "AI research hasn't done much in the last 30
years" if you also say "no one has worked on AI in the last 30 years".

update: if you want to see what people in the mainstream (Kurzweil is not
mainstream!) are actually working on, try browsing the IJCAI proceedings:

<http://ijcai.org/papers09/contents.php>

~~~
WilliamLP
The trouble with that argument is that if you'd asked anyone (anyone!) in any
AI-related field in 1980 where AI would be in thirty years, what would they
have said?

Surely not "Well, for instance one of the greatest achievements might be that
we will work on chess algorithms, and we will see some incremental
improvements resulting from tweaking certain heuristics and more intelligent
pruning through more specialized algorithms and hardcoded chess knowledge. The
programs still won't be able to learn in any interesting sense, but with the
help of several orders of magnitude of hardware speed, they will be 200 ELO
stronger than the best human!"
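
That kind of incremental improvement (heuristics plus smarter pruning) can at least be made concrete. Here is a minimal sketch of minimax with alpha-beta pruning over a toy game tree with made-up leaf scores; it illustrates the pruning idea only, and is in no way a real chess engine:

```python
# Minimax with alpha-beta pruning over a toy game tree.
# Leaves hold hypothetical static-evaluation scores; internal
# nodes are lists of child subtrees.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):       # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                # opponent would never allow this line,
                break                        # so prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree))  # -> 6
```

The "hardcoded chess knowledge" lives in the leaf evaluations; the pruning just avoids searching lines neither player would permit.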

~~~
Tichy
The DARPA car racing thing seems to be a breakthrough. Granted, 30 years ago
people would have expected more. But it is more than just a chess algorithm.

~~~
borism
DARPA Grand Challenge? How's that a breakthrough in AI? Sure, it uses a lot of
the results of "AI" research, but there's nothing revolutionary about that.

~~~
Tichy
My understanding is that they made a big leap during the challenge - from
catastrophic performance in the first run, to several successful drivers in
the second run.

Revolutionary or not, I don't think it was trivial to make an autonomous
vehicle.

------
enki
The problem with AI is that people try to call simple heuristics and learning
algorithms AI, while what we're actually seeing is an overglorified Eliza.

The term Intelligence sets the expectation of "universal learning", not just
solving problems we previously thought to be hard. And the research necessary
to accomplish that probably isn't even in the same direction as these
fraudulent AI algorithms. The Biological Computer Laboratory (which died
because AI took all the funding) under Foerster probably had a better shot at
solving these problems than the AI Lab under Minsky ever had.

This overselling soaked up the funding with empty promises and killed more
basic long-term research. Let's hope serious researchers find a way to get
their research funded again, despite the AI shills.

~~~
Confusion
It isn't actually clear whether intelligence is anything other than an
emergent phenomenon of suitably connected 'simple heuristics' and 'learning
algorithms'. If you ask me, _we_ also _are_ overglorified Elizas.

~~~
tomjen2
There is one big difference: we can reason about reasoning (meta-reasoning);
Eliza can't.

It is not a difference in degree, it is a difference in kind, and it seems to
be unique to higher primates (you can teach a dog to do tricks, but it won't
spend its time coming up with new tricks). One of the most interesting science
videos I have ever seen is one where a bonobo learns to write a symbol that
had been on a computer it used to express its emotions.

~~~
amix
Your statement about dogs is untrue. Dogs, and a lot of other animals, have
the ability to think creatively, i.e. to come up with new stuff, new tricks. A
lot of animals also have self-awareness* (e.g. dolphins, elephants, magpies).
Of course, we as humans excel at the task of thinking, since it's one of our
greatest advantages, but thinking and self-awareness are found in other
animals too, so there must be a way of reproducing them.

What spawns creative thinking and self-awareness is of course a mystery, but I
think it's a consequence of creating a sufficiently complex system built on
ordered chaos.

* [http://en.wikipedia.org/wiki/Self-awareness#Self-awareness_i...](http://en.wikipedia.org/wiki/Self-awareness#Self-awareness_in_animals)

------
JabavuAdams
Why care about this faddishness?

Anyone with a desktop computer (or pen and paper, for that matter) can do
cutting-edge research in GAI. So it's not popular with the VCs and the
military-industrial-corporate-welfare system.

The point is if you're interested, just do the research. You don't need $$$.
If you want $$$, over-promise about crappy little web apps, and get funding
that way.

------
ct4ul4u
I am confident that ongoing over-promising will make AI winter a continuing
reality.

~~~
catzaa
> will make AI winter a continuing reality.

This will be a good thing. Theoretical development thrived under the AI
“winter”.

I am going to throw the rooster in the hen house and say it: it seems that a
large part of AI is thinking up new functions with parameters to be tuned.
These parameters are called something exotic, and the words "neural" and
"network" are used liberally.
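
Tongue in cheek or not, that description fits a lot of machine learning. A minimal sketch of the "function with parameters to be tuned" loop, fitting a one-parameter model f(x) = w * x to made-up data by plain gradient descent:

```python
# Fit f(x) = w * x to made-up data by gradient descent:
# the "tune the parameters" loop in miniature.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0        # the single parameter to be tuned
lr = 0.01      # learning rate
for _ in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # -> 2.04 (the least-squares fit)
```

The exotic names mostly decorate this same loop: a parameterized function family, a loss, and an update rule.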

~~~
jimbokun
"It seems that a large part of AI is thinking up new functions with parameters
to be tuned."

Doesn't that encompass all of Computer Science?

------
hegemonicon
The problem with AI development seems to be the assumption that it's going to
be a useful business tool. This is sort of like assuming that an artificial
organism created in the lab will make a good secretary.

~~~
lux
I think another problem is the expectation of what AI will look like at all.
We think of intelligent human-like robots as something from science fiction,
when in reality AI could just be a bit of code running on a standard machine
with specific, and likely specialized, inputs and outputs.

Think of an AI machine working with atmospheric data as one "sense" combined
with seismic data and some others, with the directed goal of predicting
certain types of disasters (tsunamis...?).

Free will and emotions are other assumptions we would likely not build into
these machines, so the worry of self-interest may not exist either, which
would help make them good at something useful for us.

~~~
randallsquared
I think the best definition of "free will" basically boils down to
"unpredictable in detail absent simulation". This sounds weird at first, but
the results probably match your intuitive understanding of free will. Given
that, I think that any general intelligence approaching human level will have
free will.

That doesn't mean that it won't have certain goals, though it remains to be
seen whether it will be possible to design a clean goal system with a top-
level goal (see also "Friendliness"). Humans clearly do not have this kind of
goal system.

~~~
eru
Would a good pseudo-random number generator satisfy your definition?

(By the way, humans, when asked directly for random numbers, are terrible at
the task.)
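
A concrete example of what I mean: a classic linear congruential generator (with the Numerical Recipes constants) is fully deterministic, yet its output looks arbitrary unless you know the seed and re-run the recurrence, i.e. simulate it:

```python
# A linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
# Deterministic, but the output looks arbitrary unless you know
# the seed and re-run the recurrence yourself.

class LCG:
    def __init__(self, seed):
        # multiplier, increment and modulus from Numerical Recipes
        self.a, self.c, self.m = 1664525, 1013904223, 2**32
        self.state = seed

    def next(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

rng = LCG(seed=42)
print([rng.next() % 100 for _ in range(5)])  # looks random, fully reproducible
```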

~~~
randallsquared
Well, it satisfies the "free" part, but probably not the "will" part, which
implies reasons for actions. I'm not actually sure humans have free will under
my definition, but I think we probably do. It could be that some algorithm
that's much simpler than an actual human could predict in detail what a given
human will do without simulation, in which case I'd be forced to admit that
humans don't have free will by my definition.

------
lacker
We do have search engines, which are pretty sweet and vaguely related to
artificial intelligence. The future of AI will probably be similar: we'll get
more cool stuff, but it will not be Data from Star Trek; it will be something
we didn't expect.

~~~
jacquesm
I don't see search engines as being 'vaguely related to artificial
intelligence' at all.

~~~
dhs
Apparently Google's Peter Norvig does, e.g.
[http://video.google.de/videoplay?docid=-6754621605046052935&...](http://video.google.de/videoplay?docid=-6754621605046052935&hl=de#)
(he starts talking about the relationship between Google and AI at 06:40).

~~~
runningdogx
I don't think Norvig thinks differently; he's just using the term "AI"
differently. "AI" in the context of the OP is "strong AI" or AGI (artificial
general intelligence). Dr. Norvig takes the term "AI" to also encompass
"machine learning" in the sense of developing algorithms that learn within
limited problem domains. When Norvig means AGI, he says it explicitly (see the
video at around 7:30 to 8:30), and he also says specifically that Google is
not interested in general intelligence.

------
kansando
It seems to me that the core dilemma in the AI community has always been: is
intelligence a "systems" problem or a "general" problem? People first tried a
series of general approaches and did not make much progress. Now that people
are taking bottom-up approaches in individual domains, they are making a lot
more progress. So maybe the AI winter is still on for the "general" approach,
but the thaw is well on its way for the "systems" approach.

------
Maro
"DARPA has also supported programs on the Semantic Web with a great deal of
emphasis on intelligent management of content and automated understanding.
However, James Hendler, who was the manager of the DARPA program at the time,
has expressed some disappointment with the outcome of the program."

The Semantic Web is not a good use case, reputation-wise, for modern AI.

------
jsm386
_Wintermute_?

------
tybris
No. Two words: Semantic Web.

------
Novash
Can't seem to see this at work. Saving to see later at home.

