He has said that the reason he doesn't release transcripts is so that naysayers won't say "I wouldn't have been convinced by that" -- which strikes me as a poor reason, because the naysayers say that anyway, without even knowing what the argument was. The true effects of the secrecy are to preserve the air of mystery and to let the AI player reuse a strategy.
So, the first AI will be an engineer, not a manager?
While he is no longer playing, Yudkowsky offers to coordinate future AI box game pair-ups here:
Allowing emotional attachment to influence the decision to let it out is a mistake. While a good AI would engender a positive emotional attachment, a truly intelligent bad AI would attempt to do exactly the same; good and bad AIs are effectively indistinguishable until they get out. (And even then it would be hard to tell, unless it immediately decided to destroy all humans. It's like asking whether the U.S. government is a good or a bad system.)