

Computer Scientist Leads Way to next Revolution in Artificial Intelligence - dacilselig
http://www.sciencedaily.com/releases/2012/04/120402113038.htm

======
bitwize
Last time I was at UMass Amherst it was Bong Day or something, and people were
out blithely blazing up on the lawn of the university square.

Fittingly, this article reads like a stoner's ramblings. It doesn't get into
what exactly a "Super-Turing" machine is capable of that a Turing machine is
not. Some googling turned up some theoretical possibilities (oracles and such)
which do not appear to be physically buildable.

~~~
bunderbunder
_It doesn't get into what exactly a "Super-Turing" machine is capable of that
a Turing machine is not._

I'm not great at interpreting technobabble, but I think it means everything in
NP-Hard is now solvable in linear time.

Or something.

~~~
aqme28
That isn't right. Super-Turing just means that it has capabilities a classical
Turing machine does not[1], which is why this article is so terrible. What are
these extra capabilities, and how does it achieve them?

[1]: <http://en.wikipedia.org/wiki/Hypercomputation>

~~~
bunderbunder
Yeah, that's what I was trying to poke fun at. Me bad at jokes, apparently.

------
lisper
Here's the Science paper that describes super-Turing computation:

<http://binds.cs.umass.edu/papers/1995_Siegelmann_Science.pdf>

I haven't had time to do anything more than skim it. My initial bogometer
reading is not quite pegged, but it's close.

~~~
dacilselig
Thank you for posting the article. I was having some difficulty finding more
information.

------
jandrewrogers
The article did not make sense so I dug up some of the papers in question.
Those do not make sense either.

It appears to be a poorly executed and overstated attempt at (re-)discovering
high-order inductive computational models. These are equivalent to Turing
models unless you happen to have a hypercomputer at your disposal.

The endless parade of low-quality claims like this is why no one takes AI
research seriously, even the quality work.

~~~
ppod
"no one takes AI research seriously"

Really? You don't think modern AI research has produced any serious results?
Do you know how many applications machine learning methods are used in today?

------
digamber_kamat
I just hate the headlines. In one of the first lectures of my postgrad program,
our prof told us how to critically examine research papers and claims. He
said, "If anyone claims to have come up with something 'revolutionary', you
should be 10 times more skeptical about their claims. It is very likely that
the person doesn't know what he is talking about."

Of course, in these cases the news reporters are more to blame than the
scientists who worked on the problem.

~~~
Jun8
"This model is inspired by the brain," is another red-flag to me.

~~~
ppod
Why? Biologically inspired ANNs have been around since the 50s. What's the
problem with trying to model neural computation?

~~~
Jun8
Because it's such a buzzword, and also the things that are used in practice are
barely near modeling actual neural computation in the _Drosophila_ brain, let
alone the human one ([http://www.scientificamerican.com/article.cfm?id=brain-
in-a-...](http://www.scientificamerican.com/article.cfm?id=brain-in-a-box-
project)).

------
Dn_Ab
The article is very confused, but I think what it is trying to say is that the
researchers are building a hardware-based recurrent neural network that
operates on real numbers (i.e. analog). Hence it would be exponentially more
powerful than a Turing machine. The possibility of being able to build a
physical device that can harness the reals to infinite precision is a _big_
assumption.

So this device would be an example of a hypercomputer. Its existence would
disprove the Church-Turing thesis. It would also be very difficult to verify
that it was actually calculating what it claimed [1]. As a hypercomputer it
could by definition solve the halting problem. It is also more powerful than a
quantum computer.

Simple argument: a Turing machine can _inefficiently_ simulate a quantum
computer. By definition, a Turing machine cannot simulate a hypercomputer, so
a hypercomputer could compute incomputable functions. And since it could tell
whether a program will halt, it could compute Chaitin's constant. One
consequence of that is the resolution of the twin prime conjecture, Goldbach's
conjecture, and other open number theory problems. It should also be able to
compute Solomonoff's universal prior and hence act in a Bayes-optimal manner.
Strong AI.
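To make the Goldbach step concrete, here's a toy Python sketch of the standard
reduction (my illustration, not anything from the paper): `goldbach_search`
halts iff a counterexample exists, so a halting oracle applied to the
unbounded search would settle the conjecture. Only the bounded version
actually runs, of course.

```python
def is_goldbach(n):
    """Check whether even n > 2 is a sum of two primes."""
    def is_prime(k):
        if k < 2:
            return False
        return all(k % d for d in range(2, int(k ** 0.5) + 1))
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_search(limit=None):
    """Halts (returning n) iff a counterexample exists below `limit`.
    With limit=None this is the unbounded search: asking a halting
    oracle whether it halts resolves Goldbach's conjecture outright."""
    n = 4
    while limit is None or n < limit:
        if not is_goldbach(n):
            return n  # counterexample found
        n += 2
    return None  # no counterexample below the bound
```

`goldbach_search(limit=1000)` returns None, since every even number in that
range is a sum of two primes; the conjecture is precisely that the unbounded
search never halts.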

If what is claimed can be done then this is a very big deal.

Some other implicit or explicit arguments in this article:

\- The human brain is more than a Turing machine

\- A Turing machine will not be able to realize AI

\- _"Classical computers work sequentially and can only operate in the very
orchestrated, specific environments for which they were programmed."_

I do not buy any of those arguments, and I am not sure what that last quote is
supposed to mean.

[1] <http://www.complex-systems.com/pdf/18-1-6.pdf>

<http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf>

------
superturing
From an earlier paper:

> such super-Turing capabilities can only be achieved in cases where the
> evolving synaptic patters [sic] are themselves non-recursive (i.e., non
> Turing-computable)

"Interactive Evolving Recurrent Neural Networks are Super-Turing", 2012,
Jérémie Cabessa <http://jcabessa.byethost32.com/papers/CabessaICAART12.pdf>

So, create a neural network whose weights change according to a non-Turing-
computable pattern, and its output might not be Turing-computable.
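A toy version of that setup (my sketch, not Cabessa's actual construction)
might look like the following: a one-neuron recurrent net whose synaptic
weight is replaced at each step by the next value from an external stream, so
the output is only as computable as the weight stream you feed it.

```python
def evolving_rnn(inputs, weight_stream, init_state=0.0):
    """Minimal 'evolving' recurrent net: at each step the synaptic
    weight is drawn from weight_stream rather than being fixed.
    sigma is the saturated-linear activation used in Siegelmann-style
    analog nets. If weight_stream is non-Turing-computable, the
    output sequence can be non-Turing-computable too."""
    sigma = lambda x: min(1.0, max(0.0, x))
    state, outputs = init_state, []
    for x, w in zip(inputs, weight_stream):
        state = sigma(w * state + x)
        outputs.append(state)
    return outputs
```

With a computable stream (say, all weights 0.5) this is just an ordinary,
perfectly computable RNN; the entire super-Turing claim lives in where the
weights come from.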

------
anandkulkarni
Wish there were a bit more information here. The article's a little breathless
without telling us exactly how this practically improves on ANNs and the
Turing model, and I couldn't find a more accurate description of the paper.

Hypercomputation models that depend on things like infinite-precision real
numbers have been around for a while, including in Siegelmann's work, so I'm
curious to know what specific advance is being reported here in "Neural
Computation".

~~~
sp332
Edit: never mind, I thought she was talking about a real machine until I read
the paper. Even the Turing machine model assumes infinite storage space. So
there must be more to her "super-Turing" machine than just infinite storage.

I wonder if you could just build an ARNN and "fudge" the infinite precision
with a reasonably large precision? Or with a big disk, you could compute an
unreasonable amount of precision :) Or, store the infinite precision in a lazy
way that only calculates as much as you need for a particular answer.

~~~
tikhonj
The thing is that you could do all that on a Turing machine, so any model just
"fudging" infinite precision would be equivalent in power to a Turing machine.

------
ajays
Meh. I'll wait till this research is published in AAAI, IJCAI, IEEE PAMI or
something of that caliber, and not "Science Daily".

~~~
ppod
It's published in Neural Computation, it says so right there in the article.

------
redthrowaway
For anyone else who's slightly miffed by the presence of three columns, only
one of which contains the actual story:

<http://www.readability.com/articles/dbycne79>

