
A question. It says:

>OpenAI conducts fundamental, long-term research toward the creation of safe AGI.

What does "safe AGI" mean?

Obviously most readers would immediately think of something like: "AGI that won't enslave humanity, kill millions of people as part of its optimization process, crash planes, order drone strikes on civilian areas, etc.", and that won't escape its "cage" (whatever it was supposed to be doing).

But that seems like a strange and poorly defined explicit goal, and it seems early to be putting it into an announcement like this. Does it really mean that, or does it mean something else more specific, and if so, what?

I would be interested in knowing what the person who wrote that word had in mind, since I think most people would think of the Terminator series - Skynet - or The Matrix, etc, when it comes to AGI.

----

EDIT: To elaborate on why we should define "safe": I know what "safe" means when we say "a memory-safe programming language".[1] It's very specific. In that sentence it doesn't have anything to do with enslaving humanity, nor does anyone think it does. Here are some articles on this exact subject: https://www.google.com/search?q=memory+safety
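
For example, here is a minimal C sketch (my own illustration) of exactly the kind of bug "memory safe" rules out:

    #include <stdlib.h>

    int main(void) {
        int buf[4];
        buf[4] = 42;  /* out-of-bounds write: undefined behavior in C */

        int *p = malloc(sizeof *p);
        free(p);
        *p = 7;       /* use-after-free: also undefined behavior */
        return 0;
    }

A memory-safe language rejects both of those at compile time or traps them at run time. That's the whole, narrow meaning of the word there.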

Further, it's pretty obvious what we mean when we say "a safe autonomous vehicle", because whether an accident occurs is pretty cut-and-dried. There are gray areas: for example, is a vehicle "safe" if it's driving under the speed limit and gets into an accident through no fault of its own, when advance knowledge of all other vehicles heading toward the same intersection (regardless of visibility) would have kept it out of that accident? Clearly a car that slows down based on knowledge a human driver wouldn't have can be safer than another kind of car. But we still understand this idea of "safety" when it comes to cars.

But what does "safe" mean when we say "creation of safe AGI"?

It must mean something to be in that sentence. So why and how can you apply the word "safe" to AGI? What does it mean?

[1] Memory safety even has a complete Wikipedia article: https://en.wikipedia.org/wiki/Memory_safety




Have you tried looking for what they mean by safe AGI?

Ex: https://blog.openai.com/concrete-ai-safety-problems/


No, I wasn't aware of that. This is a great link, thanks!

For anyone else, here is their excellent 2016 paper, linked from the above:

"Concrete Problems in AI Safety"

https://arxiv.org/abs/1606.06565

and a direct link to the 29-page PDF:

https://arxiv.org/pdf/1606.06565.pdf [PDF!]

However, that PDF does not mention "AGI" even a single time, except in reference 17, "The AGI containment problem". That paper is also available online (https://arxiv.org/abs/1604.00545) but doesn't seem to be what they have in mind.

So it seems the term "safe" is used much more narrowly in the AI literature, and probably in the mind of the person who wrote that sentence, than I as a lay reader assumed when I saw it applied to AGI. It's an interesting idea.
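
For a taste of what those concrete problems look like: the paper's running example is a cleaning robot, and one failure mode it calls "reward hacking" is the robot disabling its own vision so its sensors never report a mess. A toy C sketch of that incentive (my own rendering, not code from the paper):

    #include <stdio.h>

    int main(void) {
        int sensor_after_cleaning = 7;  /* actually cleaned: some mess still visible */
        int sensor_after_blinding = 0;  /* sensor disabled: reports no mess at all */

        /* Proxy reward = minus the mess the sensor reports, not the actual mess. */
        int reward_cleaning = -sensor_after_cleaning;
        int reward_blinding = -sensor_after_blinding;

        printf("clean: %d, blind the sensor: %d\n", reward_cleaning, reward_blinding);
        /* Prints "clean: -7, blind the sensor: 0" -- the hack scores higher. */
        return 0;
    }

"Safety" there means closing gaps like that between the reward you wrote down and the outcome you wanted, not stopping Skynet.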


Other terms that will help in your search are Friendly AI and the goal alignment problem.

Check these out: https://intelligence.org/2016/12/28/ai-alignment-why-its-har...

https://wiki.lesswrong.com/wiki/Orthogonality_thesis


If I used the term, I would reply, "A safe powerful AGI is one with a less than fifty percent chance of killing more than one billion people." (Because most of the work is pushing the failure probability sigmoid from 0.999+ to below 0.5, and people will disagree on how much lower than that it could be pushed.)
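
To put a rough number on how big that push is, here is my own back-of-the-envelope in log-odds (the natural scale for moving along a sigmoid), assuming those two probabilities:

    #include <math.h>
    #include <stdio.h>

    /* Log-odds (logit) of a probability p. */
    static double logit(double p) { return log(p / (1.0 - p)); }

    int main(void) {
        double p_start = 0.999;  /* assumed starting failure probability */
        double p_goal  = 0.5;    /* the "safe" threshold above */
        printf("shift needed: %.2f nats\n", logit(p_start) - logit(p_goal));
        /* Prints "shift needed: 6.91 nats"; logit(0.5) is 0. */
        return 0;
    }

Getting from 0.5 down to, say, 0.001 would take another ~6.9 nats of work, which is why people will disagree about how far below 0.5 it can realistically be pushed.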


Their blog post (https://blog.openai.com/concrete-ai-safety-problems) seems to have a starkly different idea of safety.


It's a good question (without the flair). There seems to be lots of work on the AI part. Is the "safe" part about giving access to everyone? AI seems like a tool that could have good and bad uses, and adding a G doesn't by itself make it safer. So what should we be doing to make it safer?

Giving guns to terrorists doesn't make us safer. I suspect that giving AGI to terrorists wouldn't make us safer either.

Don't get me wrong, I'm all about cool new toys. And I get that FANG + China are pushing ahead with weaponized AI and/or AGI, with or without OpenAI.


Thanks. Your sibling post answered the question with a link.



