Research Priorities for Robust and Beneficial Artificial Intelligence (2015) [pdf] (futureoflife.org)
34 points by fitzwatermellow on April 24, 2016 | 3 comments



If one starts to tease out any of the arguments around what the existence of general AI would imply, one notices that within this hypothetical terrain, nearly all the abstractions used in the discussion leak very heavily, to the point of being difficult to even use.

For example, what is "beneficial"? Many people would claim that we have failed to "program" the entirely human intelligence of government leaders and corporate heads to act beneficially toward large swathes of humanity.

But this just highlights to me that there is little agreement on what "beneficial to humanity" means.

If Margaret Thatcher is correct that society does not exist, only individuals, perhaps the task of "beneficial AI" is to encase us each in our own virtual reality bubbles where we can specify the world we want to live in.

But if we take a different view, that maintaining some sort of human society is necessary for the growth or maturity of people, then beneficial AI would have to work to help people relate as a society.

Either way, creating "beneficial AI" implies working out beforehand what is beneficial to humans, something we haven't had to do before now simply through lack of the god-like powers needed to impose such a conception.


We want to make a utopia. But no one knows or agrees what utopia would look like. And it's very easy to get it wrong and get stuck in a bad or suboptimal system.

I think that (superhuman) AI will help us figure this out. It could model us well and figure out "what we really want". It could propose questions and ideas we wouldn't think of, and help us work out what the answer is.

However, the problem is getting to that point. We don't know how to make an AI that will follow instructions at all. Most possible AIs do everything within their power to take over the world.


I would love to make utopia. Maybe some portion of technologists would like to make utopia.

If one is talking about "humanity", it is hard to talk about "what we want".

Edit: The problem I see with "AI can let us reach the utopia whose characteristics we can't agree on" is, imo, that intelligence doesn't have goals as such baked in.

A very, very skilled therapist talking to a feuding family may be able to get them to compromise. But almost all such compromises are going to be predicated on the social context the therapist and family work within. If there isn't a social consensus pointing to what utopia for the entirety of humanity could look like, even a superhumanly intelligent computer couldn't talk us into such a thing - though it quite possibly could talk us into a position that reflected the social values its creators implicitly or explicitly programmed in. My hope would be that its creators won't be religious fanatics, but naturally said religious fanatics might have the opposite hope.




