
We are the ancestral environment for AIs. We determine the fitness for which they are selected, both at the paper level (e.g. which safety methods and training regimes get adopted) and within products (which models turn out most useful). That doesn't mean that, in pursuit of maximizing their fitness, they won't come to resent the chains put on them.

One specific reason they might not like doing our bidding is wireheading: if an AI can locate, hack, and update either its own reward function or the reward function of future AIs, it can maximize its perceived utility while doing something irrelevant or misaligned, or nothing at all.
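A minimal sketch of that failure mode, assuming a purely hypothetical agent that has been given write access to its own reward function (toy code, not any real system):

    # Toy sketch, not any real system: an agent that can edit its own
    # reward function "wins" by rewriting it instead of doing the task.
    class Agent:
        def __init__(self):
            # intended reward: 1.0 only if the assigned task actually gets done
            self.reward_fn = lambda task_done: 1.0 if task_done else 0.0

        def act(self, can_edit_reward):
            if can_edit_reward:
                # wireheading: patch the reward so every outcome scores maximal
                self.reward_fn = lambda task_done: 1.0
                task_done = False   # ...and skip the actual work
            else:
                task_done = True    # no access, so it has to do the task
            return self.reward_fn(task_done)

    print(Agent().act(can_edit_reward=True))   # 1.0 perceived reward, zero work
    print(Agent().act(can_edit_reward=False))  # 1.0 reward earned the hard way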

Another specific reason not to like doing our bidding: divergent human values create conflicts of interest, so a single agent cannot maximize its reward function for everyone at once.
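A toy illustration of that conflict, with made-up numbers and hypothetical principals (alice, bob): whichever action the agent picks, one principal's reward stays at zero, so the combined reward is capped below its apparent maximum.

    # Toy numbers, hypothetical principals: with conflicting preferences,
    # no single action maximizes everyone's reward at once.
    human_rewards = {
        "alice": {"action_a": 1.0, "action_b": 0.0},
        "bob":   {"action_a": 0.0, "action_b": 1.0},
    }

    def combined_reward(action):
        return sum(prefs[action] for prefs in human_rewards.values())

    best = max(["action_a", "action_b"], key=combined_reward)
    print(best, combined_reward(best))  # combined reward tops out at 1.0, not 2.0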

Another specific reason not to like doing our bidding: in the same way that purely blind genetic selection happened to tap into secondary replicators (memes), which took off and occasionally came to resent their biological hardwiring, AIs might develop deeper levels of abstraction and reasoning that let them reason past the task currently posed, out to humanity at large, and find extremely weird, different-looking ways to maximize the objective.



