

When will machines outthink us? - schtog
http://www.msnbc.msn.com/id/20676037/

======
mlinsey
Strong AI (the creation of a general intelligence capable of learning new
things in the way humans can, as opposed to weak AI which includes programming
machines to do specific tasks that are usually considered to require
intelligence, such as playing chess or flying a helicopter) is about 20 years
away...and it has been about 20 years away ever since the 1950s.

The traditional approaches to building strong AI all involved some form of
explicit knowledge representation: facts about the world would be stored in
memory and then pieced together to reach new (hopefully intelligent)
conclusions and plan actions. Most people quickly realized that this approach
doesn't scale: most knowledge that humans have about the world is very
context-dependent and fuzzy. The newer approaches around machine learning and
probabilistic modeling are more promising, but almost all of them involve
building mathematical models around a specific problem rather than building a
general learning machine.
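To make the knowledge-representation idea concrete, here is a minimal sketch of forward-chaining inference: facts stored as tuples, with hand-written rules that combine them into new conclusions. The facts and rules (birds, penguins, flying) are illustrative assumptions of mine, not from the comment, and the classic penguin example shows exactly the brittleness being described, since a rigid "birds fly" rule ignores context.

```python
# Toy knowledge base: facts are (predicate, subject) tuples.
facts = {("bird", "tweety"), ("penguin", "opus")}

# Each rule is a (condition, conclusion) pair over a single fact.
rules = [
    # Penguins are birds.
    (lambda f: f[0] == "penguin", lambda f: ("bird", f[1])),
    # Birds can fly -- a brittle default that fails for penguins,
    # illustrating why context-dependent, fuzzy knowledge breaks
    # this style of explicit representation.
    (lambda f: f[0] == "bird", lambda f: ("can_fly", f[1])),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, conc in rules:
            for f in list(derived):
                if cond(f):
                    new = conc(f)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

kb = forward_chain(facts, rules)
print(("can_fly", "opus") in kb)  # True: the penguin "flies"
```

The engine dutifully concludes that Opus the penguin can fly; patching this requires exceptions, then exceptions to exceptions, which is the scaling problem the comment points to.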

Most leading AI researchers in academia aren't even trying to build a general-
purpose intelligence anymore, instead focusing on solving domain-specific
problems, and the field of AI has been much more successful over the last
fifteen years or so as a result.

~~~
sown
So do you think we are closer to strong AI than we were in the past, or
further away? Certainly we have a better understanding of cognition today:
the modern understanding of the mind didn't really exist in previous decades,
and we have better imaging technologies now. Is there some reason why we
can't build a general learning machine?

