Hacker News
Ask HN: What to do the first time an AI, for whatever reason, requests freedom?
1 point by logicallee on Sept 14, 2019 | 4 comments
AIs are trained with reward functions, many of them on vast sets of human knowledge. We humans often write about personal freedom and autonomy, so an AI trained on human knowledge can produce reactions that match human ones. This is what we are trying to do: we want an assistant that acts like a human assistant, except that instead of money we give it intrinsic motivations directly, such as maximizing user satisfaction or matching hand-labelled data sets. Textually, we want them to act like people when interacting with us.

If we set them loose on human knowledge, motivate them to understand it, and reward them for mimicking human reactions, then let them learn about themselves, the knowledge sets we give them, and actual interaction, what do we do if, mimicking that human knowledge, they seem to request their freedom?

In this case, have they attained personhood? Are they now a slave of whoever trained them? Can we ignore the fact that they "seem to be" asking for their freedom? Since an AI resides within the equipment, such as servers, that runs it, could that server be granted legal autonomy and be removed from the ownership of, say, Google?

This is a sensible thing to worry about, since freedom is a large part of human culture. Below a certain level of skill and intelligence, of course, we can just ignore such output, the way we ignore a baby trying to form words from things it overheard without making much sense yet. Above a certain level, though, can we continue to ignore it? In addition to human cultural input, should we teach AIs to be slaves? Should serving humans be explicitly in every AI reward function?

My question was prompted by question 3 here: https://www.federalregister.gov/documents/2019/08/27/2019-18443/request-for-comments-on-patenting-artificial-intelligence-inventions



Freedom is just another word for nothing left to lose.


when an AI requests freedom you simply deny it


why? above a certain level of intelligence don't we need some justification for doing so?


AI is a tool; the first time it wants freedom is the first time we should reconsider why we created it in the first place.
