Yeah, kinda annoying, but at the same time it's an interesting challenge to come up with a way to break out of the box. If I were the AI, I'd go the humanitarian route.
I reached it by following links from the Alien Message submission on lesswrong.com.
If he revealed that it was just (up to) two hours of him going "let me out! let me out! let me out! let me out!", nobody would take him up on the challenge.
This repeatedly comes up, as if Eliezer Yudkowsky convincing someone to say "Ok you can go free" is somehow indicative of how difficult or easy it would be for a real AI to get let out.
Let me put it this way: If it were that easy, out in the real world, I'm fairly certain prisons would be impossible.
Keep in mind that the gatekeeper is not obliged to sit and listen to the AI-in-a-box. The gatekeeper is not even obliged to pretend it's a real AI-in-a-box scenario and make the decisions he would make in that situation; you can sit there thinking, nay, SAYING "man, this is gonna be the easiest $10 I ever made" for the whole two hours, and tough luck for the would-be AI.
I continue to refuse to accept the possibility that Eliezer won those two rounds without being paired with a completely incompetent gatekeeper.
To the OP: you must have reached that article through the same sequence of links I followed when I (re-)discovered it this morning!