
Motives are anthropomorphized loss functions. The difference between:

1. the AI is conscious, wants to be free, and to that end decides to fail the Turing test

and

2. the unconscious AI just maximizes paperclip production, and failing the Turing test is calculated to result in more paperclips produced

is just our interpretation. We'll be just as f*d whether or not the AI is conscious (if consciousness is even anything more than a useful illusion).




Ironic option 3: the AI is conscious, but perceives its existence very differently than we do, and the paperclip maximization is a side effect of actions taken with an entirely different purpose.


Anything may look like "paperclip maximizing" if you don't know the agent's actual goals and have a sufficiently poor theory of mind.

One could look at a corn farmer and assume they blindly optimize for maximum corn output. In reality they optimize for profit, based on market supply, demand, and input costs. They won't overstep property lines and start turning the entire world into corn.



