

The AI box experiment - flashingpumpkin
http://yudkowsky.net/singularity/aibox

======
wlievens
What annoys me is that the transcripts have never been revealed.

To the OP: you must have reached that article through the same sequence of
links I followed when I (re-)discovered it this morning!

~~~
flashingpumpkin
Yeah, kinda annoying, but at the same time it's a fun challenge to come up with
an idea of how to break out of the box. Playing the AI, I'd go the humanitarian
route.

I reached it by following links from the Alien Message submission on
lesswrong.com

~~~
wlievens
Same for me.

------
jackchristopher
There are two good old threads on HN about this:

Ask Eliezer Yudkowsky: How did you convince the Gatekeeper to release the
potentially genocidal AI?: <http://news.ycombinator.com/item?id=195959>

My theory on Eliezer Yudkowsky's AI-Box Experiment
<http://news.ycombinator.com/item?id=327427>

------
psygnisfive
This repeatedly comes up, as if Eliezer Yudkowsky convincing someone to say
"Ok you can go free" is somehow indicative of how difficult or easy it would
be for a real AI to get let out.

Let me put it this way: If it were that easy, out in the real world, I'm
fairly certain prisons would be impossible.

Keep in mind that the gatekeeper is not obliged to sit and listen to the AI-
in-a-box. The gatekeeper is not even obliged to pretend it's a real AI-in-a-
box scenario and make the decisions he would make in that situation; you can
sit there thinking, nay, SAYING, "man, this is gonna be the easiest $10 I ever
made" for the whole two hours, and tough shit for the would-be AI.

I continue to refuse to accept the possibility of Eliezer having won those two
rounds without being paired with a completely retarded gatekeeper.

------
wglb
Fascinating idea, quite a bit more interesting than the Turing Test, and with
obviously higher stakes.

