

Artificial-intelligence research revives its old ambitions - singhketan7
http://web.mit.edu/newsoffice/2013/center-for-brains-minds-and-machines-0909.html
In my opinion, hard AI could be just around the corner. I wonder what kinds of startups would rise up to leverage a technology so fundamentally different.
======
hharrison
We are far, far from hard AI. If anything, this article shows that we're only
just now starting to ask the right questions - and even that's debatable.
Plus, they're very hard questions.

The problem is we have no theory of intelligence, no theory of psychology.
Research in the cognitive fields is fractured, all about tiny insignificant
phenomena with little relation to anything else. Our best theory is "the brain
is like a computer" which is, frankly, a terrible theory.

Here's something I find more promising: On Intelligence From First Principles:
Guidelines for Inquiry Into the Hypothesis of Physical Intelligence [1]

In short, what we really need to understand is self-organization and non-
equilibrium thermodynamics. Not image labeling.

[1]
[http://www.tandfonline.com/doi/pdf/10.1080/10407413.2012.645...](http://www.tandfonline.com/doi/pdf/10.1080/10407413.2012.645757)

~~~
chongli
_The problem is we have no theory of intelligence_

I don't think we can have a theory of intelligence. At least in the public
consciousness, intelligence is one of those "God-of-the-gaps" style concepts
that continually evolves in order to maintain the illusion of human
superiority.

~~~
Houshalter
Intelligence certainly exists. There is a reason humans are building space
ships and chimpanzees are playing around with sticks.

~~~
chongli
Right. But how do you define it _intensionally_? Saying that "humans have a
lot of it", "chimpanzees have less of it", "ditto for dolphins", "reptiles
have very little of it", etc. is defining intelligence _extensionally_. Why is
this a problem? Because an extensional definition doesn't tell you how to add
new elements to the set.

~~~
Houshalter
It's not just arbitrary examples; there is reasoning behind it. You can look
at humans building spaceships, using tools, and demonstrating understanding
of abstract concepts. There are a number of tests you could run that would
confirm something is intelligent, such as looking for any of those behaviors.

Some rough and imperfect, but still useful, definitions of intelligence could
be the ability to make good predictions based on past data, the ability to
solve optimization problems well, and learning ability.
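The "good predictions based on past data" criterion can be made concrete with a toy sketch (mine, not from the thread): a least-squares line fit is a minimal predictor whose accuracy on unseen inputs is directly measurable.

```python
# A minimal "predict from past data" test: fit y = a*x + b by least squares
# on observed pairs, then use the fit to predict an unseen input.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx            # slope, intercept

xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]  # past data generated by y = 2x + 1
a, b = fit_line(xs, ys)
print(a * 5 + b)                     # predicts 11.0 for the unseen x = 5
```

The point isn't that a line fit is intelligent; it's that this definition, unlike an extensional list of smart species, tells you exactly how to score a new candidate.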

~~~
chongli
_demonstrating understanding of abstract concepts._

What sort of test can show that a subject demonstrates an understanding of
abstract concepts?

So far, from what I've seen, if a test can be written then software can be
written to solve the test.

~~~
Houshalter
You could talk to it or you could have it solve a difficult problem.

~~~
chongli
Right. That's commonly called the Turing test. This just pushes back the
problem of defining intelligence to one of creating a _proper_ Turing test.
How do we do that?

~~~
Houshalter
The Turing test is actually a pretty decent and straightforward test.

I don't understand why this is an issue, though. Testing intelligence was
never the hard part of AI. There are so many tasks that computers currently
suck at that we would be happy to see solved, regardless of what label you
gave the solution. And I don't think many people could watch a computer doing
tasks like holding conversations or solving difficult problems and deny that
it is intelligent, even if there is no formal test that is 100% certain.

~~~
chongli
_The Turing test is actually a pretty decent and straightforward test._

How so? To me, it appears completely open-ended. You could sit there forever
asking questions and never reach a definitive result.

~~~
Houshalter
I don't see how. Have you ever _tried_ talking to a chatbot? It becomes
apparent pretty quickly that it's not intelligent.

------
dave_sullivan
I spend a fair amount of time thinking about this stuff...

Currently, we've got machine learning--I think there's a market for it, but
it's early. I think the near term is about automating data science tasks--we
won't have a shortage of data scientists because we can automate much of what
they do. Even that's a hard problem. Personally, I think deep learning
(specifically relatively recent neural net related research) will play an
important part in this--as far as automation goes, it will help automate
feature engineering (currently a huge cost).
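Why feature engineering is the cost worth automating can be shown with a small illustration (my example, not from the comment): points inside vs. outside a circle are not linearly separable in raw (x, y), but one hand-crafted feature makes the classes separable by a single threshold. Deep nets aim to discover such transformations from data rather than from an engineer.

```python
# Hand-crafted feature example: squared distance from the origin turns a
# circularly separated dataset into a one-threshold classification problem.
points = [(0.1, 0.2), (0.3, -0.1), (-0.2, 0.2),   # inside the unit circle
          (1.5, 0.5), (-1.2, 1.0), (0.9, -1.3)]   # outside the unit circle
labels = [0, 0, 0, 1, 1, 1]

def engineered_feature(x, y):
    return x * x + y * y            # squared radius

preds = [1 if engineered_feature(x, y) > 1.0 else 0 for x, y in points]
print(preds == labels)              # the single feature classifies perfectly
```

Finding `engineered_feature` by hand took domain insight; "automating feature engineering" means learning that kind of transformation from the data itself.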

In general, I agree with the sentiment in the comments that "we are a long way
from AI". Objectively, we are. However, I think we were similarly a long way
from flight when the Wright brothers figured out how to keep a plane in the
air, or where we stood towards the end of the human genome project--although
at least in that case, we knew what we were doing, whereas AI still doesn't
really have a good theory behind it. As for the first software that seems
awfully "AI-like"--I'm thinking it could very well appear by accident while
working on more general machine learning tasks. I don't think we're as far
off as most people think, but I certainly don't have a crystal ball (and
neither does anyone else).

~~~
VLM
Maybe in your second paragraph you're discussing science vs. engineering? For
example, from a science perspective the Wright Bros got to check mark "flight"
pretty decisively, mostly because of their wind-tunnel work. But the
engineering has mostly failed for most of the species: most humans have never
flown and, due to massive resource constraints, probably never will, so
defining "flight" as a success is iffy.

The genome project is another science to engineering transition problem, where
the science check mark is done, but as far as I know there are no widespread
engineering applications for the data at all, although someday there might be
in the generic sense that all research is potentially useful.

Something like this could happen in AI. The science might prove conclusively
that it could work, while human engineering isn't up to it for a few centuries
(like the Babbage experience WRT computation), or the engineering is possible
but the economics and politics make it impossible (like building a massive
power-generation dam across the Strait of Gibraltar). We could prove the
science of AI works, and then nothing happens for the next 500 years. Or
maybe not.

------
atpaino
It's funny to see this on HN, because I currently have a copy of one of Dr.
Poggio's papers in front of me. I've got to say, from what I've read so far
I've gotten more excited by his group's approach than any other's I've seen.
If you are mathematically inclined and interested in AI, I highly recommend
reading through some of his group's papers, which are available on their
site[1]. For those of you commenting that we don't have any theory on how the
brain does its magic, this new initiative should excite you because this group
is actually approaching the problem from a far more rigorous theoretical
viewpoint than before. Specifically, their theory of neurons storing both
sample images _and_ common transformations on them fits rather elegantly with
what we know about the brain, and seems like a promising route to matching
humans' ability to "learn" a new object from just one instance of it.

[1]: [http://cbcl.mit.edu/publications/index-pubs.html](http://cbcl.mit.edu/publications/index-pubs.html)

------
AndrewKemendo
I think this should be re-stated: a group of respected researchers has
revived Artificial Intelligence's original ambitions.

In fact, there have been a handful of folks thinking and working on Strong
AI/Artificial General Intelligence steadily for years. Ben Goertzel even
started the conference [1] and journal [2] in 2008.

One of the biggest things I see in every one of these conversations and
threads is everyone assuming humans are actually good at things, as opposed to
just marginally better than most other machines or humans. So people's
standards for machine intelligence are way above what they would expect of
humans.

Google Translate is a perfect example. People were complaining about it here
on HN the other day because it wasn't perfect; however, compared to any group
of average professional linguists (some of whom I work with daily), Google
Translate and some of the other machine-translation services are light years
ahead in accuracy for their speed.

People also forget that "Strong AI" and "AI-complete" are always moving
targets for benchmarking - and there is no benchmarking standard (no, a
Turing test isn't a robust enough test for an AGI). I posit that humans will
never truly accept an AI as smarter than us until it dominates us. It will
always be: well, yes, it can do X, but is it really thinking? Does it have
consciousness? Great questions philosophically, but practically it really
doesn't matter.

This is good though; I think if there is anything the human race should be
working on, it's this. Everything, and I mean EVERYTHING, pales in comparison
in my opinion.

[1] [http://www.agi-conference.org/2013/](http://www.agi-conference.org/2013/)
[2]
[http://www.degruyter.com/view/j/jagi](http://www.degruyter.com/view/j/jagi)

------
ramanan
The goals of the new research center seem close to what Douglas Hofstadter
(author of GEB) has been working on for all these years.

A related HN post from a few weeks ago:
[https://news.ycombinator.com/item?id=6605015](https://news.ycombinator.com/item?id=6605015)

------
swombat
Solving all those problems should take a team of 10 or so scientists no longer
than 2 months at the outside.

~~~
eli_gottlieb
Is this a joke about that 1956 conference?

~~~
swombat
Aye.

(Ai?)

But I forgot the exact reference
([http://en.wikipedia.org/wiki/Dartmouth_Conferences](http://en.wikipedia.org/wiki/Dartmouth_Conferences))
so I will now edit my original post to include the right numbers...

------
seiji
What would it do to the nature of startups? Startups would cease to exist. The
first one to AI wins.

~~~
mbq
Er, you're assuming that this strong AI would be smarter than a human brain,
which is not at all obvious.

~~~
PeterisP
From the evidence of how fast new botnets spread, and the current prices of
0-day vulnerabilities, we can assume that any strong AI could easily get
access to immense computing power (percentage points of globally installed
CPUs).

The difference is that it can use them not for sending low-return spam, but
for multiplying its power - even if it doesn't give a qualitative difference,
there should be a quantitative difference (how many different tasks it can do
at once) when a strong AI gains many orders of magnitude more computing power
than any laboratory/datacenter where it was initially created.

~~~
VLM
Latency and bandwidth are going to suck and that might be important.

There's an assumption in most AI discussions that AI "must" be embarrassingly
parallelizable: my brain is parallelized, so surely an AI would have to be
the same way, much as heavier-than-air human flight surely required a
faithful reproduction of bird anatomy (LOL).
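The latency-and-bandwidth worry can be quantified with Amdahl's law (my sketch, not from the thread): extra nodes only speed up the parallel fraction of a workload, so any latency-bound serial part caps the benefit of botnet-scale hardware.

```python
# Amdahl's law: if fraction p of the workload parallelizes, the speedup on
# n nodes is 1 / ((1 - p) + p / n). A million nodes buy little unless the
# workload is almost perfectly parallel.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(p, speedup(p, 1_000_000))   # caps out near 1 / (1 - p)
```

Even with a million stolen CPUs, a workload that is 99% parallel tops out around a 100x speedup, which is why "many orders of magnitude more computing power" need not mean many orders of magnitude more capability.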

There's also a certain assumption that intelligence implies a level of
self-awareness most humans don't have, and therefore an AI would be highly
self-aware...

There's also a whole industry of sophistry devoted to proving that no humans
are intelligent, that IQ or any other numerical measure of intelligence
doesn't exist, and that there is no way to compare intelligence. Right or
wrong, those folks will surely make life difficult for people improving,
testing, or upgrading an AI - either because they're right or because
they'll be staging political protests.

------
j2kun
Talking about "the nature of startups" seems pretty shallow in comparison.
It's like asking "how close are we to a Mars landing? What could it do to the
nature of startups?"

~~~
invalidOrTaken
"How will the landing affect the social media landscape?"

------
jackylee0424
Makes me think of one of the classic (but imaginary) debates in Star Trek:
The Measure of a Man [1]

[1]
[http://www.youtube.com/watch?v=3PMlDidyG_I](http://www.youtube.com/watch?v=3PMlDidyG_I)

------
_random_
About time!

