I did my MS in AGI at the National Intelligence University with Ben Goertzel as my outside research advisor. My thesis examined the defense implications of an AGI and who was making the best progress toward actually building one.
Since then, the DoD has started to take an interest in AGI, and in fact today, during my one-weekend-a-month drill at the Pentagon, I had a great long conversation with Maj. Gen. Seng, who is heading up efforts to implement ISR systems with more autonomous capabilities and to explore how an AGI would be utilized in defense.
One of our big open questions is what's the "stack" for AGI, conceptually? I didn't come to any conclusion on this and had to make a lot of assumptions to close out the research. I would be curious to hear the OpenAI team's thoughts on it.
Will you all be coming to AGI 16 this year in New York?
What do you imagine the U.S. Government's reaction might be?
I think the question in my mind is "dangerous to who and when?"
Is AGI an existential threat? Probably. But on what time horizon? Through what mechanism? And can we evolve and collaborate with AGI instead of de-facto competing with it?
None of the AGI warning people (Bostrom, Yudkowsky, Barrat et al.) have come up with a plausible chain of events that leads to human irrelevance or extinction. They always make a few assumptions and then claim "and then exponential growth happens," and boom, everyone's a paperclip.
The USG doesn't have a position at this point, and it's ill-prepared to react. In reality, if there is some kind of world-ending gray goo scenario that an AGI creates, nothing the US or any other government can do will matter. I think that's about as likely as Roko's Basilisk being true, though (i.e. a 0% chance).
That's what I thought. Kind of bizarre, considering the USG usually tends to stay far ahead of the curve when dealing with technology that has potentially profound national security implications. Yet at the same time, programs such as DARPA SyNAPSE exist (though only at moderate funding levels).
>In reality if there is some kind of world ending gray goo scenario that an AGI creates - nothing the US or any other Government can do will matter.
For sure. My hypothetical was intended more as taking place prior to any such disaster—in a world where AGI is viewed in the same light as WMDs, but where nothing catastrophic has yet happened. In that scenario, USG could potentially have great influence (for better or worse) over any outcome.
For my money, counter-proliferation measures would be ultimately worthless in that situation, except as a stop-gap to buy time for a larger project to solve AGI correctly.
It's basically impossible to have a plan of response for something that nobody even knows how to build. That said, we have a lot of crisis action plans that might apply depending on whatever actions are happening. It would likely fall into the realm of contingency plans that have "complex electromagnetic environments" as core assumptions.
AndrewKemendo has conducted research into AGI on behalf of the military. My hypothetical was intended as a near-term scenario in which the technology proved far more dangerous than originally thought. Asking him how he thinks USG would react to such a scenario doesn't strike me as unreasonable given his background.
I'm just tired of hearing unproductive questions like this to which any response other than "we don't know" is literally science fiction. Andrew's response would have applied equally to teleportation.
Why don't we talk instead about how methods such as deep learning actually do work, and what problems they have been successfully applied to?
Well we do, but it's clear to the community that we won't get to AGI with deep learning classifiers and systems alone. So the questions we are asking are "what would a system look like that results in X kind of behavior."
I don't disagree with your teleportation analogy either, but I think you weight it too heavily toward impossibility. In fact there are serious people working on teleportation; at this point it's quantum state teleportation, but it's a start.
As such, as far as deep learning is concerned, there is no stack for AGI yet, because it's a lofty goal that is so far away from what is currently possible.
That said, I think OpenCog has the closest thing to a larger conceptual stack in mind, based on the AtomSpace approach. I think the folks over at DeepMind might have some thoughts, and perhaps the Numenta people as well.
Their system is, I think, an AGI in the sense of being an artificial general intelligence system, though not at human level yet (apart from playing Breakout and Space Invaders). Here's Demis Hassabis talking about it: https://www.youtube.com/watch?v=0X-NdPtFKq0&feature=youtu.be...
Part of their stack is the Torch machine learning library: https://en.wikipedia.org/wiki/Torch_(machine_learning)
Isn't that basically the most obviously bad idea ever, so terribly stupid that there have been several movies chronicling how massively bad an idea it actually is to build "AGI" for military goals?
Besides, if they don't do it, the enemy surely will!
<del>So far they've managed to label them as "coming from the LessWrong background" and subsequently dismiss via appeal to a strawman Paperclip Maximizer. It doesn't give me much confidence in them.</del>
<ins>Nevermind. I didn't realize this comment was not made by an OpenAI representative. Also, we could use a strikethrough formatting tag on HN. 'dang?</ins>
I hope they eventually address those points though.
 - http://slatestarcodex.com/2015/12/17/should-ai-be-open/
I apologize for mistakenly assigning this comment to your group.
The last paragraph of that comment does say "the LessWrong folks tend to be overly dramatic in their concerns" but goes on immediately to add "But they do have a point that the problem of controlling something much more intelligent than yourself is hard [...] and, if truly super-human intelligence is practically possible, then it needs to be solved before we build it".
2. Currently, there is no solid test for Artificial General Intelligence. How much of a priority is it to create one?
Answering my own questions:
1. No, the incentive is just not there.
2. Currently, this is my main goal related to Artificial General Intelligence. One should know how far or near they are from creating Artificial General Intelligence.
There is of course no commonly agreed-upon one, but in my research the best one I found was the Universal Anytime Intelligence Test.
Will you work with third parties to raise awareness and support, and to recruit talent and external collaborators (e.g. research groups)?
The AllenAI Institute is running a pretty interesting contest at the moment, with pretty direct links to OpenAI's mission.
Of course, Kaggle often degenerates into a "who can drive xgboost the best" contest, but some of the feature engineering is quite novel.
A decent Winograd schema challenge would be pretty interesting.
We expect to collaborate with many third parties over time. We're just getting started, so not sure on the specifics yet!
The way I see it, there's only one conceptual barrier to cross between current AI and "AGI-like" technology, which could be summed up as 'models which take themselves into account'.
Whilst it's trivial to have software modify itself, we don't have good models for predicting what those modifications will do (in a way which is more computationally efficient than just running them). An analogy is how encoding programs as ANNs lets us perform gradient descent, which we couldn't do if we encoded them as e.g. a string of Java code.
If we find a powerful software model which allows efficient prediction with white-box self-references, then I think lots of progress will be made quite quickly.
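The gradient-descent analogy above can be made concrete with a toy sketch (hypothetical, in NumPy): because a "program" encoded as a differentiable parameter tells us exactly how a small modification changes its behavior (the gradient), we can improve it directly instead of blindly re-running variants, which is all we could do with a program encoded as a string of Java code.

```python
import numpy as np

# Toy "program": y = w * x, encoded as a single differentiable parameter w.
# (Illustrative stand-in for an ANN; names here are made up for the example.)
x = np.array([1.0, 2.0, 3.0])
target = 2.0 * x          # the behavior we want the program to have
w = 0.0                   # initial parameter

for _ in range(100):
    pred = w * x
    # The encoding is differentiable, so the effect of a small change to w
    # is predictable in closed form: d(MSE)/dw.
    grad = np.mean(2.0 * (pred - target) * x)
    w -= 0.1 * grad       # gradient descent step

print(round(w, 3))        # converges to ~2.0
```

A string of Java source admits no such gradient: a one-character edit can change behavior arbitrarily, so the only way to evaluate a modification is to run it.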
It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly.
and basically the entire reason the project exists in the first place is to try to get a technical handle on AGI risks. I would have to say it makes perfect sense.
We're only a long way off if we continue to put tiny dollar figures into it. You should be thinking about it more like the nuclear issue. Would you rather more or less thought be put into it?