OpenAI Research (openai.com)
124 points by aburan28 7 months ago | 22 comments



On their "OpenAI Charter", they list several basic principles they'll use to achieve the goal of safe AGI, including this one, which I find pretty interesting:

>We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

If I'm reading that correctly, it means that later on, if/when some company is obviously on the cusp of AGI, OpenAI will drop what they're doing and start helping this other company so that there isn't a haphazard race to be the first one, which could result in unsafe AGI. That sounds like a well-intentioned idea that could cause more problems in practice. For instance, if there are multiple companies on almost equal footing, then combining forces with one of them would give the others a sense of an even tighter deadline, possibly making the development even less safe.

Also, they only mention assisting "value-aligned, safety-conscious" projects, which seems pretty vague. It just seems like they should give (and perhaps have given) more thought to that principle.


We had in mind less a mechanical trigger ("project X is within Y years of AGI, let's drop everything and join them"), and more a broad commitment to avoid races, along with an invitation to other organizations to build relationships focused on ensuring a good outcome. In practice we plan to be in constant communication with other major AI orgs about these issues (in some cases we already are), and eventually we might hope to help build multilateral agreements that would avoid the kind of coordination issues you describe. This will be an ongoing process, playing out over years, with lots of details that need to be worked out. The charter simply announces our commitment to see this process through.

On "value-aligned, safety conscious" projects, we wrestled a lot with this wording, but we believe it's the best way to describe our important caveats. There has to be some level of malicious use at which we wouldn't be okay cooperating with a project. And there has to be some level of neglecting safety considerations that would also make it unethical to cooperate. Our message here is that aside from these caveats, avoiding a race is the most important thing. In practice we expect (hope?) that will be many value-aligned, safety-conscious organizations, and again the conversation around these topics will play out over years rather than just being a random decision we make.

More generally, on both points our intention was to make a broad statement of values and intent, rather than to nail down precisely what actions OpenAI will take. The central document of an organization needs to be both short and flexible enough to remain relevant for many years, and that necessarily means sketching a broad framework and leaving the details to be filled in later. That said, you should expect us to fill in many of these details over time, both in explicit documents and in our actions. In fact, we are building a policy team that is focused on these issues, and it's hiring: https://jobs.lever.co/openai/638c06a8-4058-4c3d-9aef-6ee0528...


The problem is they're trying to police the output of a group of unrelated organizations, without any recognized authority to do so. The vagueness of the principle reflects the intractability of the goal in the current environment.


It's not even clear how to evaluate whether anyone "comes close to building AGI".

Have they defined what "AGI" is supposed to be? I can't find it on their website.


For anybody else excited when they hear "open" and "AGI" in the same sentence, if you don't know the OpenCog project, the wiki (especially if downloaded in book form) makes for fascinating reading:

https://wiki.opencog.org/w/The_Open_Cognition_Project


By book form you mean PDF form?


A question. It says:

>OpenAI conducts fundamental, long-term research toward the creation of safe AGI.

What does "safe AGI" mean?

Obviously most readers would immediately think of something like: "AGI that won't enslave humanity, kill millions of people as part of its optimization process, crash planes, order drone attacks on civilian sectors, etc., or escape its 'cage' (i.e., go beyond whatever it was supposed to be doing)."

But that seems like a strange and poorly-defined explicit goal. And kind of early to be putting into an announcement like this. Does it really mean that - or does it mean something else, more specific - and if so, what?

I would be interested in knowing what the person who wrote that word had in mind, since I think most people would think of the Terminator series - Skynet - or The Matrix, etc, when it comes to AGI.

----

EDIT: To elaborate on why we should define "safe": I know what "safe" means when we say "a memory-safe programming language".[1] It's very specific. In that sentence it doesn't have anything to do with enslaving humanity, nor does anyone think it does. Here are some articles on this exact subject: https://www.google.com/search?q=memory+safety
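
To make that concrete (a minimal sketch of my own, not from OpenAI's material): the following C program writes past the end of a buffer, which is exactly the class of bug a memory-safe language rejects at compile time or traps at runtime:

    int main(void) {
        int buf[4];
        buf[4] = 42;   /* out-of-bounds write: one past the end of buf  */
                       /* a memory-safe language rejects or traps this; */
                       /* C silently corrupts adjacent memory instead   */
        return 0;
    }

That's the level of precision I'm hoping for when the word "safe" gets attached to AGI.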

Further, it's pretty obvious what we mean when we say "a safe autonomous vehicle", because whether an accident occurs is pretty cut-and-dried. There are gray areas: for example, is a vehicle "safe" if it's driving under the speed limit and gets into an accident through no fault of its own, even though advance knowledge of all the other vehicles heading toward the same intersection (regardless of visibility) would have kept it out of that accident? Clearly a car that slows down based on knowledge a human driver wouldn't have can be safer than another kind of car. But we still understand this idea of "safety" when it comes to cars.

But what does "safe" mean when we say "creation of safe AGI"?

It must mean something to be in that sentence. So why and how can you apply the word "safe" to AGI? What does it mean?

[1] even has a complete Wikipedia article: https://en.wikipedia.org/wiki/Memory_safety


Have you tried looking for what they mean by safe AGI?

Ex: https://blog.openai.com/concrete-ai-safety-problems/


No, I wasn't aware of that. This is a great link, thanks!

To anyone else here is their excellent 2016 paper linked from the above:

"Concrete Problems in AI Safety"

https://arxiv.org/abs/1606.06565

and direct link to 29-page PDF:

https://arxiv.org/pdf/1606.06565.pdf [PDF!]

However, that PDF does not mention "AGI" even a single time, except in reference 17, "The AGI containment problem". That paper is also available online here - https://arxiv.org/abs/1604.00545 - but doesn't seem to be what they have in mind.

So it seems the use of the term "safe" is actually much narrower in the AI literature, and probably in the mind of the person who wrote that sentence, than I as a lay reader assumed when seeing it applied to AGI. It's an interesting idea.


Other terms that will help in your search are Friendly AI and the goal alignment problem.

Check these out: https://intelligence.org/2016/12/28/ai-alignment-why-its-har...

https://wiki.lesswrong.com/wiki/Orthogonality_thesis


If I used the term, I would reply, "A safe powerful AGI is one with a less than fifty percent chance of killing more than one billion people." (Because most of the work is pushing the failure probability sigmoid from 0.999+ to below 0.5, and people will disagree on how much lower than that it could be pushed.)


Their blog post (https://blog.openai.com/concrete-ai-safety-problems) seems to have a starkly different idea of safety.


It's a good question (without the flair). There seems to be lots of work on the AI part. Is the "safe" part giving access to everyone? AI seems like a tool that could have both good and bad uses, and adding a G doesn't by itself make it safer. So, what things should we be doing to make it safer?

Giving guns to terrorists doesn't make us safer. I suspect that giving AGI to terrorists wouldn't make us safer either.

Don't get me wrong, I'm all about cool new toys. And I get that FANG + China are pushing ahead with weaponized AI and/or AGI with or without OpenAI.


Thanks. Your sibling post answered the question with a link.


The thing I most want to get out of OpenAI right now is a 10v10 Dota 2 pros-versus-bots all-star match. I wonder if they got anything out of last year's data from all the pros playing with their bot...


Or CS:GO. I think there's more teamplay than in Dota, lol. The only thing is you'd need to take out the aiming and shooting part.


I would be way more impressed if there was an AI that can play Overwatch. It's one of those games that are fundamentally hard for computers because of meta-gaming, map differences and ability combinations. It would be much harder for AI to converge on some single game-breaking trick, or just bruteforce possible strategies through blind trial and error and memorization.

Also, aim doesn't matter as much, so you wouldn't have to introduce some artificial restrictions. (Reaction time does give you a big advantage, though. You can counter a lot of things simply by reacting to voicelines in time.)


It seems like the website is getting ready for an announcement.


How so?


It's the same content with a new layout. People usually do this when they're expecting a lot of traffic on their site, which typically corresponds to announcements.


I'm not a fan of this website design; it strikes me as an attempt to look extra fancy. AI is more human than that.


As a counter example I am a fan of this website design. It's nice, readable and does not distract.



