Artificial General Intelligence: Now Is the Time (kurzweilai.net)
7 points by bootload on June 3, 2007 | 11 comments


All the academic AI researchers are focusing on boring narrow AI problems, and nobody has the balls to really tackle the big problem[s] of Artificial General Intelligence. The AI field promised a lot in its early days, and failed to deliver, so now everyone just plays it safe -- it has become taboo within the AI field to even discuss "human-level AI". Some people think it's not possible, but the meat computer mounted on top of your torso is an existence proof that it is possible to build an intelligent system out of matter.

Here's a good but long video that discusses AGI and why it's feasible, aimed at a technical audience. Skip over the introductions at the beginning. Google Video: http://video.google.com/videoplay?docid=-821191370462819511 Higher quality 1GB MPEG4: http://www.archive.org/details/FutureSalon_02_2006 Shorter video on "The Human Importance of the Intelligence Explosion": http://video.google.com/videoplay?docid=6114518772001796913 Eliezer Yudkowsky also has some interesting papers at http://yudkowsky.net

Help advance AGI, donate to the Singularity Institute for Artificial Intelligence! http://www.singinst.org/challenge Feel free to email me if you're interested in chatting on these topics.


I disagree with you on a lot of points. First, I don't think AI has failed to deliver. There are a ton of things we can do now that we couldn't 30 years ago. Just to name a couple, how about simultaneous localization and mapping http://web.mit.edu/16.412j/www/html/readings/Eliazar+Parr-ijcai-03.pdf, and face detection http://citeseer.ist.psu.edu/viola01robust.html.
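
For a concrete sense of how far one of those narrow deliverables has come, here is a minimal sketch (in Python with numpy; my own toy code, not the paper's) of the integral-image trick at the core of Viola and Jones's face detector: once the integral image is built, the sum of any rectangular region, and hence any Haar-like feature, can be evaluated in constant time.

    import numpy as np

    def integral_image(img):
        # ii[y, x] is the sum of all pixels above and to the left of
        # (y, x), inclusive: a cumulative sum over rows and columns.
        return img.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, top, left, height, width):
        # Sum of the pixels in a rectangle in O(1) time,
        # using at most four lookups into the integral image.
        bottom, right = top + height - 1, left + width - 1
        total = ii[bottom, right]
        if top > 0:
            total -= ii[top - 1, right]
        if left > 0:
            total -= ii[bottom, left - 1]
        if top > 0 and left > 0:
            total += ii[top - 1, left - 1]
        return total

    def two_rect_feature(ii, top, left, height, width):
        # A Haar-like "edge" feature: left half minus right half of a
        # rectangle, the kind of feature the Viola-Jones cascade thresholds.
        half = width // 2
        return (rect_sum(ii, top, left, height, half)
                - rect_sum(ii, top, left + half, height, half))

    # Toy usage on a random 24x24 "image" (the detector's window size).
    img = np.random.rand(24, 24)
    ii = integral_image(img)
    print(two_rect_feature(ii, 0, 0, 24, 24))

The real detector boosts thousands of such features into a cascade, but the constant-time feature evaluation is what makes it fast enough to run in real time; it's also exactly the kind of narrow, domain-specific engineering being argued about here.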

I also disagree that AI researchers are working on boring narrow problems. There are plenty of researchers working on huge open problems like object recognition, speech recognition, reinforcement learning, etc. And they're making great progress too. I think the reason most AI researchers don't talk about making "human-level AI" is that this isn't really a quantifiable goal. There are plenty of things computers can do better than people already (would you want a computer with "human-level" addition and subtraction?). And for just about every specific useful task humans do better than computers, there are a few dozen AI researchers working on it. Why is that not a reasonable approach to the problem?


Yes, we can do a lot that we couldn't do 30 years ago. However, this only falls under the banner of "AI" because AI has been redefined from the promises of 50 years ago [1], when the goal was to build "human-level" intelligent agents. In the 1955 proposal for the "Dartmouth Summer Research Project on Artificial Intelligence" [1], revered AI researchers Marvin Minsky, John McCarthy (inventor of Lisp), and Claude Shannon (inventor of information theory) claimed that "a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer". They failed at this lofty goal of solving the AI problem over a summer, and over time the field of AI was redefined to mean narrow, fancy mathematical tricks for solving domain-specific problems. Most "AI" is not intelligent at all; instead, it consists of approaches for searching some search space with a non-general-purpose tool, such as genetic algorithms, neural networks, or first-order logic theorem provers. A "true AI" would discover regularities [2], or patterns, in any search space, and exploit these patterns to incrementally improve its ability to navigate that space. Examples of regularities in the world are that navigation is often best performed by sticking to the regularly appearing feature we call a "road" instead of attempting to climb over buildings, and that navigation can be effectively guided by attempting to minimize the Euclidean distance between the current location and the goal location. Instead, most of the currently available tools essentially perform a stochastic search through the state space, possibly guided by some heuristics.
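
To illustrate the difference between blind stochastic search and search guided by a regularity, here is a toy sketch of my own (not taken from any of the linked material): A* on a small grid, where the only "knowledge" is the hard-coded Euclidean-distance heuristic mentioned above. The point of AGI would be software that discovers such regularities itself rather than having a human bake them in.

    import heapq, math

    def astar(grid, start, goal):
        # A* on a 4-connected grid; grid[y][x] == 1 marks an obstacle.
        # The hard-coded "regularity" is the Euclidean-distance heuristic.
        def h(p):
            return math.hypot(p[0] - goal[0], p[1] - goal[1])

        frontier = [(h(start), 0, start, [start])]
        visited = set()
        while frontier:
            _, cost, pos, path = heapq.heappop(frontier)
            if pos == goal:
                return path
            if pos in visited:
                continue
            visited.add(pos)
            x, y = pos
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                        and grid[ny][nx] == 0 and (nx, ny) not in visited):
                    new_cost = cost + 1
                    heapq.heappush(frontier,
                                   (new_cost + h((nx, ny)), new_cost,
                                    (nx, ny), path + [(nx, ny)]))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 1, 0],   # a "wall" the heuristic alone cannot see through
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (0, 2)))   # coordinates are (x, y)

The heuristic is a useful regularity of open terrain, yet it knows nothing about walls; a general learner would have to notice such structure on its own and revise its guidance accordingly.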

While a lot of progress is being made in the redefined and inaccurately named field of AI, object recognition and speech recognition are still narrow domains. They are not using general-purpose approaches, but instead tweaking and tuning algorithms specific to those areas. In reality, I am not championing "human-level AI", as I think it would be very difficult to duplicate a human's strengths and deficiencies [3] accurately. Instead, I am saying that we should be encouraging research that produces a general intelligence: software that can recognize and exploit regularities (patterns) in the state spaces of many problem domains, and apply the abstracted knowledge gained from earlier experiences to new domains and problems. Such software would have a tremendous impact on society, assisting us in solving and analyzing many problems and likely leading to an "intelligence explosion" after crossing some threshold at which the software is able to recursively improve itself. [4]

You asked "just about every specific useful task humans do better than computers, there are a few dozen AI researchers working on it. Why is that not a reasonable approach to the problem?" How do we take all these domain specific solutions and get a cohesive general intelligence out of them? I see these domain specific algorithms as being possibly useful to a general intelligence as an optimization to shift some of the load off of the more general purpose algorithms, but I don't see how throwing a bunch of face recognition, path finding, neural network and genetic algorithms into a box would automatically cause a general intelligence to pop out. These research areas seem mostly tangential or supportive to genuine Artificial General Intelligence research.

1. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

2. http://en.wikipedia.org/wiki/Kolmogorov_complexity

3. http://www.singinst.org/Biases.pdf

4. http://en.wikipedia.org/wiki/Technological_singularity


I think the problem is still that there isn't really a clear definition of general intelligence or how it should be embodied on a computer. I agree that putting a bunch of different algorithms for different tasks in a box won't create something people would be willing to call general intelligence, but what do you expect a general purpose intelligence algorithm to do exactly? That is, if I handed you one, how would you test it? What are the inputs and outputs?

Don't get me wrong, I'd love to create a general artificial intelligence. It's just not clear to me what that means. In the absence of a good definition of the general problem, I think it's perfectly reasonable to pick an extremely hard specific problem (like visual object recognition), and focus on that, under the assumption that in order to completely solve this specific problem you will end up solving the (undefined) general problem.

"A "true AI" would discover regularities [2], or patterns, in any search space, and exploit these patterns to incrementally improve its ability to navigate the search space." -- Actually this sounds like Eurisko, a heuristic search program that modifies it's own heuristics http://en.wikipedia.org/wiki/Eurisko

Also, it's worth pointing out that the techniques used in different subfields of AI are not always that different. There has been some work on very general, powerful frameworks that explain a lot of specific algorithms, like Markov logic networks http://en.wikipedia.org/wiki/Markov_logic_network, which are sufficiently powerful that they subsume both first-order logic and statistical graphical models (which is to say maybe 90% of all recent machine learning algorithms). With these sorts of general frameworks, new algorithms and approaches in one subfield get ported over to other subfields, and there is more interaction than you might think.
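
For anyone who hasn't run into them, the core idea of an MLN is that a possible world's probability is proportional to exp of the summed weights of its satisfied formula groundings; pure first-order logic is recovered as the weights go to infinity. A minimal sketch of my own (a toy by brute-force enumeration, not Alchemy or any real MLN package):

    from itertools import product
    from math import exp

    people = ["Anna", "Bob"]
    atoms = [("Smokes", p) for p in people] + [("Cancer", p) for p in people]

    WEIGHT = 1.5  # soft constraint; an infinite weight would make it a hard rule

    def n_satisfied(world):
        # Number of true groundings of the single weighted formula
        # "Smokes(x) => Cancer(x)", grounded over every person x.
        return sum(1 for p in people
                   if (not world[("Smokes", p)]) or world[("Cancer", p)])

    # Enumerate all 2^4 possible worlds and their unnormalized weights:
    # P(world) is proportional to exp(WEIGHT * n_satisfied(world)).
    worlds = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]
    weights = [exp(WEIGHT * n_satisfied(w)) for w in worlds]

    # Conditional query by summing over consistent worlds.
    num = sum(wt for w, wt in zip(worlds, weights)
              if w[("Smokes", "Anna")] and w[("Cancer", "Anna")])
    den = sum(wt for w, wt in zip(worlds, weights) if w[("Smokes", "Anna")])
    print("P(Cancer(Anna) | Smokes(Anna)) =", num / den)   # about 0.82

Real MLN systems obviously don't enumerate worlds; they do weighted-SAT or MCMC style inference, but the weighted-formula semantics above is the whole trick.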


Shane Legg and Marcus Hutter of IDSIA did some recent work on a metric for machine intelligence. ( http://www.vetta.org/documents/ui_benelearn.pdf ) Marcus Hutter's PhD thesis produced a theory of "Universal Artificial Intelligence" called "AIXI" based on Solomonoff induction. Sadly, AIXI is incomputable, and the time and space bounded version "AIXItl" is still impractical. http://www.hutter1.net/ai/aixigentle.htm
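
If I'm reading the Legg-Hutter paper right, their "universal intelligence" of an agent pi is its expected reward summed over all computable environments, weighted toward simple ones (my paraphrase, so treat the notation as a sketch):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments (with suitably bounded total reward), K(mu) is the Kolmogorov complexity of environment mu, and V_mu^pi is pi's expected total reward in mu. The 2^{-K(mu)} factor is the same Solomonoff-style simplicity prior that makes AIXI incomputable in the first place; an agent scores well by doing well across many simple environments, not by being hand-tuned for one.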

But you're right: we don't know much about intelligence nor how it should be embodied on a computer. This is why we should encourage some really smart people to focus on the problem! I sincerely think this is an area that would yield to research. Might take a long time and a lot of brainpower though...

Lenat's Eurisko is pretty cool. I don't know much about it, but it doesn't sound like Eurisko is as general as what's needed.


Donating to the institute you mention triggers the same reaction in me as the article's plea for funding: if anyone has a good idea for making AI, he should have no problem convincing other hackers to join in and form a startup. Computer resources are probably not the limiting factor.

I'm sure that a lot of researchers in academia have pet theories about how to make it happen, but they need grants. The NSF has funded enough failed AI projects that it's hard for them to justify funding more, given the current scarcity of money in science.

There are probably a lot of people working on AGI on their own, or in startups. I am. But you won't hear much about people like me unless we succeed to some level.

IMO, the field is waiting for its Einstein to show up.


I'm suggesting that we fund research into the problem of AGI. I'm not saying that we should be funding teams to code up someone's personal AGI hypothesis. In the past, too much attention has been focused on trying to code up an AI without any real justification for why that AI system would work, and this has led to the failures you cite. What we need is fundamental research into intelligence, theories that describe intelligence and how to attain it. This is how research progresses in other fields; rocket scientists do not come up with their personal favorite hypothesis of rocket propulsion then seek funding to try to build a rocket using that propulsion system. Instead, they form models and consistent theories to validate their hypotheses before actually attempting to build a rocket. Why should AI research be any different? If the implicit assumption is that general intelligence is fundamentally just too difficult to understand, I disagree, and have not seen a convincing argument making that case.


Theories are ultimately validated through experimentation and observation. For "intelligence", that means coding it up and seeing if it learns anything. That doesn't sound very different to me from "funding teams to code up someone's personal AGI hypothesis".


Sounds like a funding proposal. He makes a lot of strong claims regarding his own abilities.

Fun quote from McCarthy: "My own opinion is that the computers of 30 years ago were fast enough [for AI] if only we knew how to program them."


Re: "AI Manhattan Project"

The Manhattan Project involved almost no research (1). It was an engineering project whose constraints were manpower and money. The research was a somewhat more low-budget affair: Szilard daydreaming around London, Fermi working in a third-rate Italian lab.

(1) One of the few actual innovations was the explosive lens. http://en.wikipedia.org/wiki/Explosive_lens


Skimming the article, I don't see how it's feasible to ethically engineer a "human equivalent intelligence".

The author talks about engineering a toddler-level intelligence, the reasoning being that the toddler program would not be capable of figuring out a way to "escape" its box and get into mischief that humans would be unable to control or prevent.

But the thought of "turning off" a toddler equivalent makes me queasy. Will the toddler think it is a person? What an awful, worse-than-Truman's-World nightmare it would be for that "toddler" to find out the truth. Could we ethically "turn off" such a program?

Even if it knows that it is not human, if it is truly equivalent to a toddler it is almost certain to introduce ethical quandaries we cannot possibly predict before the program actually exists. I do not see any safe way to pursue this project while successfully avoiding ethical landmines.

I am also troubled by the vaguely utopian hopes this man has for his personal project. Attempts by people to implement any kind of utopian vision tend to end badly.




