
OpenAI makes humanity less safe - lainon
http://lesswrong.com/r/discussion/lw/oul/openai_makes_humanity_less_safe/
======
mindcrime
Meh. You can dream up all sorts of hypothetical scenarios with minimal
support, and raise the alarm around them (see: "the LHC will create a
black hole that swallows the world"). But right now, there seems to be very
little reason to think that "artificial superintelligence" would be dangerous
if it existed, and (probably more importantly) even less reason to think that
ASI is anywhere remotely close to existing. Seriously, for all the cool things
modern "AI" systems can do, they're still pretty stupid. Look up "commonsense
reasoning" and look at some of the problems that those folks are working on,
which computers still can't solve.

"Now", you might argue, "didn't AlphaGo just beat the world's best human at
Go, and didn't DeepBlue beat Kasparov ages ago? Isn't that evidence that
computers are getting pretty intelligent?" To which I'd say no... it's
evidence that computers have gotten good at playing chess and Go. But note
that the same program that won at chess did not win at Go, and the program
that won at Go isn't doing natural language understanding, etc. IOW, we have
solved very isolated and highly specialized problems, but we really haven't come
up with much in the way of a general intelligence in a sense that even
remotely approaches a human.

And even if I'm completely wrong and ASI is imminent and is potentially
dangerous, there's still no particular reason to think that OpenAI is making
things worse, as opposed to better. There's certainly a strong argument that
anything that democratizes access to advanced AI techniques helps mitigate the
(potential) danger of one bad actor inventing an ASI, having sole control of
it, and using it toward malicious ends.

