AMA: the OpenAI research team (reddit.com)
130 points by sushirain 742 days ago | 41 comments



I missed the AMA, but I'm glad to see it here, and it looks like some team members are monitoring this thread, so I'll throw some questions out there.

I did my MS in AGI at the National Intelligence University with Ben Goertzel as my outside research advisor. My thesis was to determine what the defense implications of an AGI would be and who was making the best progress toward actually building one.

Since then the DoD has started to take an interest in AGI, and in fact today, during my one weekend a month drill at the Pentagon, I had a great long conversation with Maj. General Seng [1] who is heading up efforts around implementing ISR systems with more autonomous capabilities and exploring how an AGI would be utilized in defense.

One of our big open questions is what's the "stack" for AGI, conceptually? I didn't come to any conclusion on this and had to make a lot of assumptions to close out the research. I would be curious to hear the OpenAI team's thoughts on it.

Will you all be coming to AGI 16 this year in New York?

[1] http://www.af.mil/AboutUs/Biographies/Display/tabid/225/Arti...


I know you don't particularly subscribe to the "AGI is extremely dangerous" line of thinking, but let's say for the sake of argument we're a decade in the future and it's starting to become very evident that it in fact is dangerous—like in a recursively self-improving nightmare type way—and it could be realized via a few thousand AWS instances with the application of some very specialized knowledge.

What do you imagine the U.S. Government's reaction might be?


> I know you don't particularly subscribe to the "AGI is extremely dangerous" line of thinking

I think the question in my mind is "dangerous to who and when?"

Is AGI an existential threat? Probably. But on what time horizon? Through what mechanism? And can we evolve and collaborate with AGI instead of de facto competing with it?

None of the AGI warning people (Bostrom, Yudkowsky, Barrat, et al.) have come up with a plausible chain of events that leads to human irrelevance or extinction. They always make a few assumptions, then claim "and then exponential growth happens," and boom, everyone's a paperclip.

The USG doesn't have a position at this point, and it's ill-prepared to react. In reality, if there is some kind of world-ending gray goo scenario that an AGI creates, nothing the US or any other government can do will matter. I think that's about as likely as Roko's Basilisk being true, though (i.e., a 0% chance).


>The USG doesn't have a position at this point and it's ill prepared to react.

That's what I thought. Kind of bizarre, considering USG tends to stay far ahead of the curve when dealing with technology that has potentially profound national security implications. Yet at the same time, programs such as DARPA SyNAPSE exist (though only at moderate funding levels).

>In reality if there is some kind of world ending gray goo scenario that an AGI creates - nothing the US or any other Government can do will matter.

For sure. My hypothetical was intended more as taking place prior to any such disaster—in a world where AGI is viewed in the same light as WMDs, but where nothing catastrophic has yet happened. In that scenario, USG could potentially have great influence (for better or worse) over any outcome.

For my money, counter-proliferation measures would be ultimately worthless in that situation, except as a stop-gap to buy time for a larger project to solve AGI correctly.


> Kind of bizarre considering USG tends to stay far ahead of the curve when dealing with technology that has potentially profound national security implications.

It's basically impossible to have a plan of response for something that nobody even knows how to build. That said, we have a lot of crisis action plans that might apply depending on whatever actions are happening. It would likely fall into the realm of contingency plans that have "Complex electromagnetic environments" as core assumptions.


I know that this is completely off topic, but for the sake of argument, let's imagine that we are thirty years into the future, we have invented efficient teleportation, and we have colonized Mars. What do you think the government should do to encourage family ties to remain strong when loved ones are living on another planet?


While I appreciate you mocking me, I can't help but disagree with the implication that my post was completely off-topic.

AndrewKemendo has conducted research into AGI on behalf of the military. My hypothetical was intended as a near-term scenario in which the technology proved far more dangerous than originally thought. Asking him how he thinks USG would react to such a scenario doesn't strike me as unreasonable given his background.


Did you really expect to get a response other than "It's basically impossible to have a plan of response for something that nobody even knows how to build"?

I'm just tired of hearing unproductive questions like this to which any response other than "we don't know" is literally science fiction. Andrew's response would have applied equally to teleportation.

Why don't we talk instead about how methods such as deep learning actually do work, and what problems they have been successfully applied to?


> Why don't we talk instead about how methods such as deep learning actually do work, and what problems they have been successfully applied to?

Well we do, but it's clear to the community that we won't get to AGI with deep learning classifiers and systems alone. So the questions we are asking are "what would a system look like that results in X kind of behavior."

I don't disagree with your teleportation analogy either, but I think you weight it too heavily toward impossibility. In fact, there are serious people working on teleportation; at this point it's quantum state teleportation [1], but it's a start.

[1] http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.70....


I would like to point out that AGI is incredibly abstract and entirely theoretical right now (and philosophical, depending on the researcher, e.g. Bostrom). Deep learning is very engineering driven and very much focused on working systems that produce real world results. Even though there is some work being done on theory, it is very shaky.

As such, as far as deep learning is concerned, there is no stack for AGI yet, because it's a lofty goal that is so far away from what is currently possible.


I was hoping someone out there knew something I didn't on this, because that was the conclusion I came to in my research: nobody even has a conceptual stack.

That said, I think OpenCog comes closest to having a larger conceptual stack in mind, based on its AtomSpace approach. I think the folks over at DeepMind might have some thoughts, and perhaps the Numenta people as well.


I only take a hobby interest in this stuff, but one approach is the one DeepMind has taken to learning video games, described a bit here: http://webcache.googleusercontent.com/search?q=cache:http://...

Their system is, I think, an AGI in the sense of an artificial general intelligence system, though not at human level yet (apart from playing Breakout and Space Invaders). Here's Demis Hassabis talking about it: https://www.youtube.com/watch?v=0X-NdPtFKq0&feature=youtu.be...

Their stack is partly their Torch machine learning library: https://en.wikipedia.org/wiki/Torch_(machine_learning)
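For anyone curious about the mechanics: DeepMind's published Atari agent is a deep Q-network trained on raw pixels in Torch, but the update rule at its core can be sketched in a toy tabular form. Everything below is an illustrative sketch with made-up environment and constants, not their code:

```python
import random

# Toy tabular Q-learning on a 1-D corridor: start at state 0, reach state 4
# for reward 1. DeepMind's agent replaces this table with a deep network over
# game frames, but the learning update below is the same core idea.
N_STATES, ACTIONS = 5, (-1, +1)   # actions: move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

def step(s, a):
    """Environment dynamics: walls at both ends, reward only at the goal."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

for _ in range(500):                                  # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy should move right from every state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The DQN work's real contribution is making this stable when Q is a convolutional network rather than a lookup table (experience replay, target networks); the toy above deliberately skips all of that.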


>Since then the DoD has started to take an interest in AGI, and in fact today, during my one weekend a month drill at the Pentagon, I had a great long conversation with Maj. General Seng [1] who is heading up efforts around implementing ISR systems with more autonomous capabilities and exploring how an AGI would be utilized in defense.

Isn't that basically the most obviously bad idea ever, so terribly stupid that there have been several movies chronicling how massively bad an idea it actually is to build "AGI" for military goals?


This idea is cliché precisely because it's the most obvious course of action for the military. Don't forget that it's DoD who sponsored the previous AI summer.

Besides, if they don't do it, the enemy surely will!


Because arms races and cold wars with we-all-die-grade weaponry are an obviously excellent idea!

/s


If you still have a military (or the need for one) after you build an AGI, you built it horribly, horribly wrong.


I think you have been watching too many movies.


Movies sometimes get this thing right though. Don't forget that Dr. Strangelove pretty much turned out to be a documentary!


So your answer is, "Sure, let's weaponize brains, nothing can possibly go wrong"?


We already weaponize brains.


Well no, mostly we don't, because the brains who happen to be the weapons still need a world to live in after they fight. The difference is that with artificial brains, hey, why would they give a crap what comes after the killing?


However, we mostly only arm brains that have an IQ of <120 and some human values baked in (e.g. empathy to other humans and living beings; a family to protect, or dependence on other social structures that demand pro-social attitudes).


Scott Alexander expressed some important concerns about the project[0].

Edited:

<del>So far they've managed to label them as "coming from the LessWrong background" and subsequently dismiss via appeal to a strawman Paperclip Maximizer. It doesn't give me much confidence in them.</del>

<ins>Never mind. I didn't realize this comment was not made by an OpenAI representative. Also, we could use a strikethrough formatting tag on HN. 'dang?</ins>

I hope they eventually address those points though.

[0] - http://slatestarcodex.com/2015/12/17/should-ai-be-open/


To be clear, the following comment is what we wrote on the subject: https://www.reddit.com/r/MachineLearning/comments/404r9m/ama.... The subsequent replies are not affiliated with OpenAI.


Thanks, I missed that. I didn't notice the OpenAI flair.

I apologize for mistakenly assigning this comment to your group.


The person who referred to "LessWrong background" and talked about paperclip maximizers wasn't (at least, so it looks to me) mocking or dismissing.

The last paragraph of that comment does say "the LessWrong folks tend to be overly dramatic in their concerns" but goes on immediately to add "But they do have a point that the problem of controlling something much more intelligent than yourself is hard [...] and, if truly super-human intelligence is practically possible, then it needs to be solved before we build it".


1. Do you believe solving AI will come from a big company where the employees solving it would have little ownership of the company?

2. Currently, there is no solid test for Artificial General Intelligence. How much of a priority is it to create one?

-------------------------------------------------------

Answering my own questions:

1. No, the incentive is just not there.

2. Currently, this is my main goal related to Artificial General Intelligence. One should know how far or near they are from creating Artificial General Intelligence.


> Currently, there is no solid test for Artificial General Intelligence. How much of a priority is it to create one?

There is of course no commonly agreed-upon one, but the best one I found in my research was the Universal Anytime Intelligence Test [1].

[1] http://users.dsic.upv.es/proy/anynt/measuring.pdf


How does one monetize their own individual breakthroughs if they develop something groundbreaking in AI while involved with OpenAI?


Any chance at asking questions here? Whoops, guess I just did...


waves


Do you believe there is a limit to what models we can train with layers of differentiable units?


Do you have info materials about OpenAI I could use in AI events?

Will you work with third parties to raise awareness and support, recruit talent, and find external collaborators (e.g. research groups)?


May I suggest Kaggle?

The AllenAI Institute is currently running an interesting contest there, with pretty direct links to OpenAI's mission.

Of course Kaggle often degenerates into a "who can drive xgboost[1] the best", but some of the feature engineering is quite novel.

A decent Winograd schema challenge would be pretty interesting.

[1] https://github.com/dmlc/xgboost#whats-new
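For anyone unfamiliar with what "driving xgboost" involves: the technique it implements, gradient boosting, can be sketched in a few lines. This toy version with one-split "stumps" is purely illustrative (made-up data, no regularization or any of the engineering that makes xgboost competitive):

```python
# Gradient boosting in miniature: fit a sequence of weak learners (one-split
# decision stumps), each trained on the residuals of the ensemble so far.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]            # a step function to learn

def fit_stump(xs, residuals):
    """Pick the threshold split minimizing squared error on the residuals."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x < t]
        right = [r for x, r in zip(xs, residuals) if x >= t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x < t else rm

ensemble, lr = [], 0.5                          # lr = shrinkage / learning rate
for _ in range(20):                             # boosting rounds
    pred = [sum(lr * s(x) for s in ensemble) for x in xs]
    residuals = [y - p for y, p in zip(ys, pred)]
    ensemble.append(fit_stump(xs, residuals))

final = [sum(lr * s(x) for s in ensemble) for x in xs]
print([round(p, 2) for p in final])
```

Each round shrinks the remaining residual geometrically, so the ensemble's predictions approach the targets; the Kaggle game is mostly in the feature engineering and hyperparameters around this loop, not the loop itself.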


No particular materials right now besides the website.

We expect to collaborate with many third parties over time. We're just getting started, so not sure on the specifics yet!


Okay. May I write you an email to establish the connection? I intended to contact OpenAI later this month, but now that we're talking...


Feel free to get in touch: gdb@openai.com!


I'll send you a mail tomorrow. Thanks.


Is anyone else irked by the questions about AGI? It's a bunch of fluff put forward by hacks. We're a long way off.


I think there are legitimate research questions and concerns about AGI; unfortunately there's also a lot of fluff surrounding the area (e.g. singularity stuff, doomsday scenarios, etc.).

The way I see it, there's only one conceptual barrier to cross between current AI and "AGI-like" technology, which could be summed up as 'models which take themselves into account'.

Whilst it's trivial to have software modify itself, we don't have good models for predicting what those modifications will do (in a way which is more computationally efficient than just running them). An analogy is how encoding programs as ANNs lets us perform gradient descent, which we couldn't do if we encoded them as e.g. strings of Java code.

If we find a powerful software model which allows efficient prediction with white-box self-references, then I think lots of progress will be made quite quickly.
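To make the analogy concrete, here is a minimal sketch (all data and numbers are illustrative) of what a differentiable encoding buys you: a loss surface you can follow downhill, which has no analogue for a program encoded as a string of Java:

```python
# When a "program" is a differentiable function of its parameters, we can
# improve it by gradient descent. Here the program is y_hat = w * x, and we
# recover w from samples of y = 2x by following the loss gradient.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]    # samples of y = 2x

w = 0.0                                         # single parameter to learn
lr = 0.05                                       # step size
for _ in range(200):
    # d/dw of mean squared error (w*x - y)^2 is 2 * (w*x - y) * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                              # step downhill

print(round(w, 3))                              # converges toward 2.0
```

Mutating a Java source string gives you no such signal: there is no small perturbation whose effect on the loss you can compute without re-running the program, which is exactly the prediction problem described above.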


Considering that it's explicitly called out in their charter [1]:

> It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly.

and that basically the entire reason the project exists in the first place is to try to get a technical handle on AGI risks, I would have to say it makes perfect sense.

We're only a long way off if we continue to put tiny dollar figures into it. You should be thinking about it more like the nuclear issue. Would you rather more or less thought be put into it?

[1] https://openai.com/blog/introducing-openai/



