
Does an AI (as we have them now)? If I give someone a paper and say "wait a day then follow these instructions", I'm making the decisions, not them, even if the paper has branching logic like "if it's morning, call Alice but if it's afternoon call Bob."
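The branching instructions described above can be sketched as a short function; the names and the noon cutoff are illustrative assumptions, not part of the original comment. The point is that the executor runs the branch, but the author of the instructions made the decision when writing it.

```python
from datetime import datetime

def follow_instructions(now: datetime) -> str:
    """Execute the paper's branching logic verbatim.

    The person (or program) running this makes no decision of
    their own; the decision was made by whoever wrote the branch.
    """
    if now.hour < 12:       # "if it's morning, call Alice"
        return "call Alice"
    return "call Bob"       # "but if it's afternoon call Bob"
```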



Isn't that the point? Ethics is in the execution: if I write "ethical" instructions and someone follows them, they're operating under the constraints of those ethics.

The point is that agents that operate ethically need not be sentient; they just need to play faithfully within our rule sets.

Which will sometimes mean eschewing maxima when doing so violates them. It will sometimes mean losses or ties in zero sum games.
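One way to picture "eschewing maxima" is a chooser that maximizes payoff only over the actions a rule set permits; the actions, payoffs, and rule here are hypothetical, a minimal sketch of the idea rather than any real system.

```python
def choose(actions, payoff, permitted):
    """Pick the highest-payoff action among those the rule set
    permits -- even when the forbidden action would have won."""
    legal = [a for a in actions if permitted(a)]
    return max(legal, key=payoff)

# Hypothetical zero-sum round: "cheat" pays best but is forbidden.
actions = ["cheat", "fold", "bluff"]
payoff = {"cheat": 10, "fold": -1, "bluff": 2}.get
permitted = lambda a: a != "cheat"
```

Calling `choose(actions, payoff, permitted)` yields `"bluff"`: the agent forgoes the global maximum because taking it would violate the rules.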


Does a bomb make a decision to explode? What we define and can construct as an AI is no more capable of making a decision than your microwave.



