
How the artificial intelligence revolution was born in a Vancouver hotel - yulunli
http://business.financialpost.com/fp-tech-desk/how-the-artificial-intelligence-revolution-was-born-in-a-vancouver-hotel?__lsa=92b6-01ad
======
dalke
The history seems so drastically short-sighted that what it's talking about
makes no sense to me. Could someone explain what this is supposed to be about?

Here are my issues:

> They believed it was possible to teach a machine to learn the same way a
> child does, through artificial neural networks that mimic the function of
> the human brain. In the process of teaching a machine to learn like a human,
> they figured there was likely a lot to discover about how humans learn as
> well.

That's been part of AI thought for a very long time. In 1950 Alan Turing wrote
(quoting from
[http://loebner.net/Prizef/TuringArticle.html](http://loebner.net/Prizef/TuringArticle.html)
):

> Instead of trying to produce a programme to simulate the adult mind, why not
> rather try to produce one which simulates the child's?

or the work put into Cyc (from [http://www.businessinsider.com/cycorp-ai-2014-7?op=1&IR=T](http://www.businessinsider.com/cycorp-ai-2014-7?op=1&IR=T) ):

> Cycorp's product, Cyc, isn't "programmed" in the conventional sense. It's
> much more accurate to say it's being "taught." Lenat told us that most
> people think of computer programs as "procedural, [like] a flowchart," but
> building Cyc is "much more like educating a child."

This is part of the constructivist approach to AI. See
[http://www.aisb.org.uk/convention/aisb08/proc/proceedings/12...](http://www.aisb.org.uk/convention/aisb08/proc/proceedings/12%20Computing%20and%20Philosophy/04.pdf)
for a quickly found example of a constructivist vs. non-constructivist
approach with respect to AI.

Nor is the idea of using neural nets for this anything new. Indeed, neural
nets and AI date from the 1950s.

Thus, the next paragraph makes little sense:

> The consensus among most computer scientists at the time was that this was
> nuts. The way to get a computer to do something was to program it to do it,
> not ask it to learn the task itself. If he had been a computer scientist,
> Silverman probably would have thanked them for their time and moved on.

I don't know what "nuts" means here. Was the consensus of most computer
scientists that the constructivist approach wouldn't work? I know that
subsumption architecture and genetic algorithms were hot topics 20 years ago,
so it doesn't make sense that people thought 'program it' was the only
effective solution.

> Silverman convinced CIFAR to give that band of self-identified weirdoes
> about $10 million over 10 years, making it pretty much the only organization
> at the time to back the research of artificial neural networks.

I have no idea what this means. The article suggests a timeframe starting in
2004. I remember people doing back-prop in the early 1990s. I did some
consulting work integrating an ANN prediction system around 2005. A Google
Scholar search (
[https://scholar.google.com/scholar?hl=sv&q=artificial+neural...](https://scholar.google.com/scholar?hl=sv&q=artificial+neural+net&btnG=)
) shows plenty of diverse research in ANNs before 2000.
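For what it's worth, back-prop in that era was simple enough to write by hand. Here's a minimal sketch (hypothetical toy code of mine, not from the article or any of the cited work) of a 2-2-1 sigmoid network trained on XOR with plain gradient descent, the kind of thing people were doing long before 2004:

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights: two hidden units, each with 2 inputs + bias; one output
# unit with 2 hidden inputs + bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w[0]*x[0] + w[1]*x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0]*h[0] + w_o[1]*h[1] + w_o[2])
    return h, y

def epoch(lr=0.5):
    total = 0.0
    for x, t in data:
        h, y = forward(x)
        total += (y - t) ** 2
        # Output delta: derivative of squared error through the sigmoid.
        d_o = (y - t) * y * (1 - y)
        # Hidden deltas, backpropagated through the output weights.
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent updates.
        for j in range(2):
            w_o[j] -= lr * d_o * h[j]
        w_o[2] -= lr * d_o
        for j in range(2):
            w_h[j][0] -= lr * d_h[j] * x[0]
            w_h[j][1] -= lr * d_h[j] * x[1]
            w_h[j][2] -= lr * d_h[j]
    return total

first = epoch()
for _ in range(5000):
    last = epoch()
print(first, last)
```

Nothing about this requires modern hardware or tooling, which is why claims that ANN research was essentially unbacked before 2004 ring false to me.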

