
Let's reimplement Eurisko - kf
http://lesswrong.com/lw/10g/lets_reimplement_eurisko/
======
akkartik
I want to reiterate that sources for a reimplementation of AM are available.
It's a start.

<http://github.com/akkartik/am-utexas/tree/master>

(earlier mentions: <http://news.ycombinator.com/item?id=405112> and
<http://lesswrong.com/lw/10g/lets_reimplement_eurisko/usd>)

------
jibiki
It would be interesting to apply Eurisko-style techniques to a hopeless
problem like translating natural languages or playing Arimaa. I don't think it
would work, but the computer might learn things that humans don't know.

------
sho
What is with the neo-luddite scaremongering in the comments of that post? I
was amazed by the conviction and tenacity of some of the arguers. Is there
really a large movement of people opposed to AI research on "Skynet" grounds?

 _"This is a road that does not lead to Friendly AI, only to AGI. I doubt this
has anything to do with Lenat's motives - but I'm glad the source code isn't
published and I don't think you'd be doing a service to the human species by
trying to reimplement it."_

Christ, what can you even say to that apart from "go back to _Above Top
Secret_"...

~~~
jibiki
It's Eliezer's website, and Friendly AI is kind of his thing. This link is
long but entertaining:

<http://lesswrong.com/lw/qk/that_alien_message/>

Here's another good one, although it doesn't seem to tell the whole story (he
lost two of the next three games):

<http://yudkowsky.net/singularity/aibox>

I'm not sure what the canonical EY defense of FAI is. Most of his articles on
it seem to be very tl;dr.

~~~
frig
A wag other than myself once summarized EY's other site as "wallowing in
bias", a summary that did seem apt every time I overcame that tl;dr feeling
and tried to wade through it; I'll happily admit the possibility that there
really is something nontrivial and worthwhile going on there, but at more
than a glance the tone and contents were indistinguishable from what people
write when they are trying to convince themselves that they _do_ believe in
their heart what their mind wants to believe it believes.

As often, Dostoyevsky said it best:

_He was one of those idealistic beings common in Russia, who are suddenly
struck by some overmastering idea which seems, as it were, to crush them at
once, and sometimes for ever. They are never equal to coping with it, but put
passionate faith in it, and their whole life passes afterwards, as it were, in
the last agonies under the weight of the stone that has fallen upon them and
half crushed them._

~~~
jibiki
I don't know about that; I think it's good that someone is looking at the
risks of runaway AI and similar things. Progress relies upon a certain
diversity of thought, and that means that some people (and nearly all
scientists) will end up wasting their lives looking down blind alleys, and
none of us knows whether EY is one of them. Your Dostoyevsky quote could well
apply to Einstein, for instance, in his later years.

~~~
arakyd
There are blind alleys, and then there are ideas that make people want to
psychoanalyze you. To hyperbolize a bit, but only a bit, worrying about
runaway AI that has the potential to turn the universe into paperclips is
about as useful as worrying about warp drives that damage spacetime. It's hard
enough to create fictional technologies that require basic changes in our
understanding of the laws of physics, let alone debug them before they are
created, let alone debug them before we make the basic discoveries that make
them possible, let alone make them _provably safe_ before we make the basic
discoveries that make them possible and _before we even figure out how to
prove that relatively simple programs won't crash_.

I know of only one science fiction author who was honest enough to spell out
the fact that his hypothetical runaway AI had to make several foundational
breakthroughs in physics at just the right time in order to plausibly take off
faster than anyone could stop it, or become as smart and powerful as
singularitarians assume is possible. We have no evidence that the laws of
physics in our universe allow that sort of thing. Most super-AI fiction (and
everything written about super-AI is fiction at this point) either glosses
over this by tacitly assuming that the plodding nature of human intelligence
is the only real bottleneck to godhood (most transhumanists), or assumes that
large numbers of humans will be incredibly incompetent (most mainstream AI
fiction).

~~~
stcredzero
_before we even figure out how to prove that relatively simple programs won't
crash_

You are forgetting the lessons of pragmatism. If an AI can get a heuristic
sense of whether a simple program will or won't crash with 99.999% accuracy in
practice, then you already have a very useful tool.
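
To make this concrete, here is a minimal Python sketch of that kind of
pragmatic heuristic: estimate a program's crash rate empirically, by fuzzing
it with random inputs, rather than proving anything. The
`will_probably_crash` helper and the `reciprocal` example are hypothetical
illustrations, not anything from the thread.

```python
# Hypothetical sketch of a pragmatic crash heuristic: fuzz a function with
# random inputs and report the observed crash rate. An empirical estimate,
# not a proof.
import random

def will_probably_crash(fn, input_gen, trials=10_000):
    """Run `fn` on `trials` random inputs; return the fraction that raised."""
    crashes = 0
    for _ in range(trials):
        try:
            fn(input_gen())
        except Exception:
            crashes += 1
    return crashes / trials

# Toy "simple program" under test: crashes (ZeroDivisionError) on x == 0.
def reciprocal(x):
    return 1 / x

rate = will_probably_crash(reciprocal, lambda: random.randint(-1000, 1000))
print(f"observed crash rate: {rate:.4f}")  # roughly 1/2001 for this range
```

The estimate is only as good as the input distribution you sample from, which
is exactly the gap between "99.999% in practice" and a proof.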

What if the AI we produce turns out to be a bit dimmer than us, but never
tires, can reproduce just by making copies of itself, and is relentlessly
diligent? I suspect that such an AI could beat us in a war, despite our being
arguably "smarter."

~~~
arakyd
FAI advocates are not satisfied with a 99.999% chance that the universe will
not be turned into a uniform distribution of paperclips.

~~~
Eliezer
99.999%? I'd totally take it - _if_ it was a real number, calculated by some
trustworthy method, and not a bogus expert estimate.

Of course, if you can do that, probability ~1 (i.e., a proven theorem given
that transistors obey stated axioms) is probably just as easy.
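
For contrast, this is what "probability ~1" looks like today, at the scale of
toy programs: in a proof assistant such as Lean, a property of a simple
program is a theorem that holds for every input, given the stated axioms,
rather than an estimate from sampling. The `double` example below is a
hypothetical illustration.

```lean
-- Hypothetical toy illustration of "proven theorem given stated axioms":
-- `double_even` holds for every natural number, unconditionally.
def double (n : Nat) : Nat := n + n

theorem double_even (n : Nat) : double n % 2 = 0 := by
  unfold double
  omega  -- linear-arithmetic decision procedure closes the goal
```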

