
A 'Big Red Button' for AI to interrupt its harmful sequence of actions [pdf] - auza
https://intelligence.org/files/Interruptibility.pdf
======
shrugger
I feel like the actual solution to this is more literal.

I mean, if a box-carrying robot actually hurts someone, then the proper course
of action is to detonate something inside it to completely immobilize it, cut
it off from whatever it's networked to, and preserve everything else so we can
dissect what went wrong and prevent a recurrence.

The paper's approach is really interesting, but I think a much simpler method
would be more practical in actual use.

~~~
auza
I like that solution much better for the more severe situations. Keep in mind,
though, that the paper's approach is also a solution for learning: the
interruptions can be used to guide the AI toward the proper action, instead of
babysitting it and redesigning it forever.
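
For context, the interruptibility result in the linked paper concerns
reinforcement learners. A minimal sketch of the idea (a hypothetical toy chain
MDP, not an example from the paper): an overseer occasionally overrides the
agent's action, and because Q-learning's update is off-policy (the target uses
the max over next actions, not the action actually taken next), the overrides
don't bias what the agent learns.

```python
import random

random.seed(0)

# Toy chain MDP (hypothetical example): states 0..2, actions 0 = left,
# 1 = right; reward 1.0 on reaching the goal state 2.
GOAL = 2
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3
INTERRUPT_P = 0.2  # how often the 'big red button' is pressed

def step(s, a):
    s2 = max(0, min(GOAL, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = [[0.0, 0.0] for _ in range(GOAL + 1)]

for episode in range(2000):
    s = 0
    for t in range(100):  # cap episode length
        # epsilon-greedy action choice
        a = random.randrange(2) if random.random() < EPS \
            else max((0, 1), key=lambda x: Q[s][x])
        # Interruption: the overseer overrides the chosen action
        # with the 'safe' one (move left, away from the goal).
        if random.random() < INTERRUPT_P:
            a = 0
        s2, r = step(s, a)
        # Off-policy update: the target uses max over next actions, not
        # whatever the (possibly interrupted) agent does next, so the
        # interruptions don't bias the learned values.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if s == GOAL:
            break

# Despite frequent overrides, the greedy policy still heads for the goal.
greedy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(GOAL)]
print(greedy)
```

An on-policy learner (e.g. SARSA) run through the same loop would instead fold
the forced actions into its targets, which is roughly why the paper has to
treat the two cases differently.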

~~~
shrugger
Which is what is supposed to be happening anyway. That's literally what
machine learning is about.

