
Ask HN: Should A.I. be regulated? - sebleon
Folks like Elon Musk [1] have been very vocal about the need to start regulating AI. I was curious to hear HN's thoughts on this matter?

[1] https://www.theverge.com/2017/7/17/15980954/elon-musk-ai-regulation-existential-threat
======
denzil_correa
The outcomes from algorithms (not just AI) should be held to the same
standards as those from humans. Algorithms (and especially AI) can be biased
or wrong for a variety of reasons.

------
mindcrime
My thought? No.

Why? For one, I'm somewhat skeptical about creating ASI (Artificial Super
Intelligence) in the first place. The best argument I've heard on this front
is simply the idea that if it's possible in principle, and if technological
progress continues unabated, then it will inevitably happen on a long enough
time scale. OK, I can somewhat buy that, but I expect that the longer it takes
to create something like that (assuming it ever happens) the more we'll have
learned about how to control it in the mean time.

I'm also skeptical of the whole "AI as existential crisis" thing because I
haven't seen any argument yet that convinces me that it's likely that AI will
do "bad things" even if we invent ASI one day. Arguments that anthropomorphize
AI are entirely unconvincing to me: the key word in "Artificial
Intelligence" is "Artificial". There's no particular reason I see to
think that an AI would have any typically human motivations or desires or
whatever. So I don't see the AI becoming a Bond'ian super villain out of
greed. As for the "runaway paperclip optimizer" scenarios, those fail to
persuade me as well. If it's an "artificial super intelligence", why would it
be so dumb as to decide to convert all matter in the world into paperclips? I
mean, yeah, our current AIs are pretty stupid, but they're also harmless. An
AI smart enough to pose an existential threat will probably be smart enough to
_not_ pose an existential threat.

So if the AI is neither evil enough, nor incompetent enough, to be a threat,
where's the problem?

Leaving all that aside for a moment, I think a more likely scenario is that we
simply become over-dependent on computers in general, whether they have AI or
not, and that a computer mistake blows up the economy or something (we've
already seen small-scale versions of this kind of thing with HFT). How to
address that is an open question, but I don't see more regulation necessarily
being the answer.

Also, one other reason I don't think regulation is a useful idea: it's
probably impossible to enforce anyway. I mean, say they pass a law tomorrow
saying "all AIs must be registered". Great, I sit in my home, working on my
laptop, and I develop Samaritan. How's anybody going to know I'm working on
AI, or that I have anything useful, up until the moment I unleash it on the
world (purposefully or not)? To enforce this you'd basically have to be able
to monitor all the activity on every computer in the world _and_ be able to
recognize a burgeoning AI. Color me skeptical.

