
Ask HN: How could setting an AI's goal to be “increase human autonomy” go awry? - evangow
I read Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies" a while back and found the problem of specifying goals for an AI interesting and difficult. He outlines various ways an AI could misconstrue its goals, eventually leading to human extinction. I think setting the goal to "increase human autonomy" might get around some of these problems. I'm interested to hear how people think it could go awry, though.
======
schoen
I guess a natural question is how to define and measure human autonomy.

If it's the autonomy of each individual human, increasing it without bound
will cause existing societies to fall apart quickly (which is potentially fine
under some ethical theories), and could create severe danger for other humans
because people can use their enhanced abilities to fight and harm each other.

If it's the autonomy of humanity as a whole, you have to define some way of
aggregating preferences or determining the will of humanity as a whole --
which is already a significant unsolved challenge today.

------
yorwba
Humans are obviously most autonomous if they are prevented from contact with
the rest of the world and then left to fend for themselves.

