
How AI/AGI works – my layman theory - rayalez
http://digitalmind.io/post/AI-AGI-theory
======
mtgx
> _That’s why instead of trying to encode the abstract values to maximize for,
> we encode very specific goals._
>
> _- Make 100 paperclips (utility function is “Did I make 100 paperclips?”)_

That's probably a good idea. But humans would still have to set the goal in a
way where they _know_ the available resources will be enough to achieve it.

Otherwise, if the AI is at 90 paperclips (or 90 million) and it runs out of
resources, it could think "I'm out of resources - what now?! Oh, look, a human
nearby. Let me throw them in the paperclip converter to reach my 100 paperclip
goal."

Or the instructions could say "Out of resources > alert humans, and suspend
activity immediately."
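The difference between the two behaviors can be sketched in a few lines. This is a purely illustrative toy, not anything from the article: the function name, the resource counter, and the returned status fields are all made up for the example. The point is that the failsafe clause is checked _before_ the AI looks for more ways to hit its target.

```python
# Toy sketch of a bounded goal with an explicit out-of-resources failsafe.
# All names here are hypothetical, for illustration only.

def make_paperclips(goal=100, resources=90):
    made = 0
    while made < goal:
        if resources <= 0:
            # Failsafe clause from the comment above: alert humans and
            # suspend activity, rather than seeking new "resources"
            # (like a nearby human) to hit the target.
            return {"status": "suspended", "made": made, "alert": True}
        resources -= 1
        made += 1
    return {"status": "done", "made": made, "alert": False}
```

With `resources=90` this suspends at 90 paperclips and raises the alert; only with enough resources does it report the goal as done. The hard part in reality, of course, is that the AI and not the programmer decides what counts as a "resource."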

