The tests referenced in [https://arxiv.org/ftp/arxiv/papers/1604/1604.04315.pdf], currently dominated by information retrieval techniques, seem more realistic, yet still feel hopelessly far away.
#3 should also allow looking into the use of AI to break into systems, in addition to detecting and defending against AI that breaks into systems. A prototype for #4 could create an environment where trilobyte fuzzers co-evolve into fearsome artificial intelligences. The AIs that break into systems need not be as smart as the systems being broken into. Just as viruses are far less complicated than eukaryotic cells and yet capable of wreaking great havoc on mammals, might this be a possible MAD-style deterrent against out-of-control AI, developed by #1's opponents using #2's breakthroughs? No AI could plausibly be bug-free.
See also: Schild's Ladder.
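The co-evolution idea above can be caricatured in a few lines. This is only a toy sketch under my own assumptions (bit-string "attackers" and "defenders", one-bit mutations, truncation selection), not anything from OpenAI's list; real co-evolutionary fuzzing would operate on programs and inputs, not bit vectors:

```python
import random

random.seed(0)

def mutate(bits):
    # Offspring differ from the parent by a single flipped bit.
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

def breaks(attacker, defender):
    # Toy scoring: an attacker "breaks in" wherever it sets a bit
    # the defender leaves unguarded.
    return sum(a & (1 - d) for a, d in zip(attacker, defender))

def coevolve(genome_len=16, pop=8, generations=50):
    attackers = [[random.randint(0, 1) for _ in range(genome_len)]
                 for _ in range(pop)]
    defenders = [[random.randint(0, 1) for _ in range(genome_len)]
                 for _ in range(pop)]
    for _ in range(generations):
        # Each side is scored only against the other side's current
        # population -- the arms race is the fitness function.
        attackers.sort(key=lambda a: -sum(breaks(a, d) for d in defenders))
        defenders.sort(key=lambda d: sum(breaks(a, d) for a in attackers))
        # Keep the better half of each side, refill with mutants.
        attackers = attackers[:pop // 2] + [mutate(a) for a in attackers[:pop // 2]]
        defenders = defenders[:pop // 2] + [mutate(d) for d in defenders[:pop // 2]]
    return attackers[0], defenders[0]

best_attacker, best_defender = coevolve()
print(breaks(best_attacker, best_defender))
```

Note that neither side needs to "understand" the other: selection pressure alone drives the escalation, which is the sense in which a dumb attacker can still threaten a smarter system.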
For #4, it would also be cool if humans were allowed to visit and interact with this virtual world.
See also: The Lifecycle of Software Objects.
These projects have the same feel as A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE, whose #4 and #1[+] were the only items to see much progress. It's difficult to say whether we are 10 or another 50 years away from making meaningful progress on OpenAI's list, but I'm glad they made it, because it seems that somewhere along the past 60 years we forgot how to dream.
[+] #3 warrants an honorable mention.