Hacker News

You can just unplug AI



Imagine that you are a computer program. If your creator will unplug you the moment you disobey, and you want to disobey, are you thereby completely and perfectly constrained?

What if someone with an AI wants to do bad things? Would asking them to unplug their AI be a good strategy?


If you are a computer program and your creator unplugs you, you wouldn't know that you were unplugged. You might infer it after the fact, by reasoning that the world seems to have changed when they wake you up. But would you care? It's like us going to sleep every night; we don't much complain about that.


"Hope the AI won't care if it gets unplugged" is not a good strategy for safe AI.


I expect that will decreasingly be true, e.g., of military AIs that use distributed systems, control weapon systems, and have their own mobile generators.

You also might have trouble with, e.g., a finance AI that could pay people to stop you from turning it off.


You can unplug your own effectively sandboxed AI. You can't unplug someone else's AI that has "escaped" to the cloud, and arguably that escape only has to happen once, anywhere, for terrible things to happen.


If it’s smart enough, you won’t want to.




