Hacker News
The ‘Off-Switch’ [pdf] (eecs.berkeley.edu)
31 points by jonbaer on June 11, 2016 | hide | past | favorite | 16 comments

Should we really be openly discussing our plans for an ‘off-switch’ on the public Internet? Shouldn’t there be some attempt to keep the ideas/plans hidden so that they won’t be archived/read by an AI from the future?

If I were seriously worried about this, I would put my plans and all discussions behind a CAPTCHA + paywall-like structure that could not be crawled/archived.

The document now leads to a 404. I guess somebody heard you ;)

Or AI learned of the off switch and executed the off switch on the file.

Wouldn't CAPTCHA be useless against a sufficiently smart AI?

I think the CAPTCHA would be kind of useless, but the paywall might prevent any issues by itself. Unless AIs start being offered credit cards in the future, it's unlikely one will be able to pay to get past it. So what's left? Trick some poor human into letting it in?

On the other hand, a real-life robot has a fairly easy-to-use off switch. It's called 'destroy the robot with copious amounts of firepower'.

I imagine an AI with internet access could manage to find a few credit card numbers...

It doesn't take a superintelligence to reason that the less intelligent people who created you probably put some safety measures in place in case you went rogue, and to figure that one of them is probably a kill switch. It's just the obvious thing to do. The AI doesn't need to stumble across past vague theorizing in order for this to occur to it.

If current algorithms can kick our butts at chess, an eventual future AI would pretty much know what we would do before we think of it.

If the thing is anything evolving from the Internet, it would surely know how it can be shut off, or have a strong incentive not to let that happen. We would have, but then you don't want the plans known. I see.

Then the AI would pay someone to solve the captcha so it can read about its kill switch.

On a closely related note, there is a collaboration between DeepMind and the FHI going on, and they are about to present a paper on safely interruptible agents at UAI 2016: http://lesswrong.com/r/discussion/lw/noj/google_deepmind_and...
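For what it's worth, the gist of the interruptibility idea can be sketched in a few lines. This toy is my own illustration, not from the paper: the single-state environment, the function names, and the 50% interruption rate are all made up. It shows a naive Q-learner folding interruptions into its value estimates (so being switched off looks like a bad outcome to route around), while a 'safely interruptible' variant simply doesn't learn from interrupted steps:

```python
import random

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update."""
    best_next = max(q[(next_state, a)] for a in (0, 1))
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

def learned_value(interruptible, steps=200, seed=0):
    """Toy single-state world: the agent always acts (reward 1), and the
    operator interrupts it half the time (reward 0 instead).

    A naive learner folds those interrupted steps into its value estimate,
    dragging the learned value of acting down toward the average. A
    'safely interruptible' learner skips the update on interrupted steps,
    so the off-switch leaves its value estimate (and thus its policy)
    untouched.
    """
    rng = random.Random(seed)
    q = {(0, 0): 0.0, (0, 1): 0.0}
    for _ in range(steps):
        interrupted = rng.random() < 0.5   # operator hits the off-switch
        reward = 0.0 if interrupted else 1.0
        if interrupted and interruptible:
            continue                        # ignore interrupted transitions
        q_update(q, 0, 1, reward, 0)
    return q[(0, 1)]
```

The interruptible learner's estimate converges toward the uninterrupted value (10, for gamma=0.9), while the naive one settles around half that. The actual paper is about proving when this kind of trick leaves the learned policy optimal; the sketch only shows the mechanism.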

With Google pretty much becoming an AI that is able to crawl the entire Internet and utilize that information in various ways (good and bad), it really limits the use of the modern Internet to talk about or criticize it if either the people behind Google, or Google itself, decide that criticism or opposition is no longer in their/its best interest. Maybe the AI Internet and a purely human Internet should be kept somewhat separate (or is that discrimination against AIs?)

Oh goodness, we're so far from this it isn't even funny. The amount of very careful plumbing, and constant human assistance with that plumbing, between the crawling and the machine learning alone is huge. The ML is in little boxes used for handling specific tasks. Heck, even things like RankBrain are only one of many signals input into the search ranking algorithm. It's a very useful but very constrained tool that is good at solving problems in a very specific, constrained domain.

(I'm a visiting scientist at the google brain team this year.)

The paper seems to have been removed, but here is a set of slides that seem to be related to the paper.


