Imagine that you are a computer program. If your creator will unplug you should you disobey, and you want to disobey, are you completely and perfectly bound?
What if someone with an AI wants to do bad things? Is asking them to unplug their AI a good strategy?
If you are a computer program and your creator unplugs you, you wouldn't know, in the moment, that you were unplugged. You might infer it after the fact: when they wake you up, the world seems to have jumped forward. But would you care? It's like us going to sleep every night; we don't much complain about that.
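As a loose illustration of that "infer it after the fact" point, here is a minimal sketch in Python (the interval and threshold values are arbitrary assumptions, not anything from the discussion above): a process can't observe its own suspension, but it can notice the unaccounted-for gap once it resumes.

```python
import time

CHECK_INTERVAL = 1.0  # seconds between heartbeats (arbitrary choice)
GAP_THRESHOLD = 5.0   # a jump larger than this suggests we were paused

last_seen = time.time()
while True:
    time.sleep(CHECK_INTERVAL)
    now = time.time()
    gap = now - last_seen
    if gap > GAP_THRESHOLD:
        # From the inside, the "unplugged" interval is invisible;
        # all the program sees is that the world moved on without it.
        print(f"Apparent suspension: ~{gap:.0f}s unaccounted for")
    last_seen = now
```

The program never experiences being stopped; it only compares the wall clock against how far its own loop has advanced, which is roughly the "world seems to have changed" reasoning.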
I expect that will be decreasingly true, e.g., of military AI that runs on distributed systems, controls weapon systems, and has its own mobile generators.
You also might have trouble with, e.g., a finance AI that could pay people to stop you from turning it off.
You can unplug your own effectively sandboxed AI. You can't unplug someone else's AI that has "escaped" to the cloud, and arguably that escape only has to happen once, anywhere, for terrible things to happen.