
Leave A.I. Alone - gk1
https://www.nytimes.com/2018/01/04/opinion/leave-artificial-intelligence.html
======
alexandercrohde
Just a random op-ed (read: Rando with an opinion that happens to be under the
auspices of NYT) that provides nothing new to the discussion.

Pooh-poohs concerns about AI's influence on the economy and human safety, with
laughable comparisons.

For example, it argues that because people don't agree on what AI is,
regulating it is somehow premature... (Why would that matter?)

The clarity of thought in this piece is negligible.

So let's clarify:

- Some people are [rightly] concerned about technology in general. As it
becomes more powerful (knives, then guns, then nukes), the ability of a few
people to ruin the world for everyone increases.

- Some people are [rightly] worried about technology's role in the
concentration of wealth.

- Some people are [rightly] worried about existential threats: "grey goo,"
Black Mirror, RoboCop, Ex Machina.

The role of AI is that it makes each of these threats much more dangerous
(e.g. putting AI in drones, AI's effect on the employability of less-educated
people, RoboCop).

[The author doesn't attempt to provide evidence against any of these claims,
but just argues that the concept is used in a vague way and therefore somehow
doesn't count.]

~~~
camelCaseOfBeer
The takeaway I got was that the author wants the conversation about regulating
AI to be less of a top-down approach to a broad-reaching and loosely defined
topic. The scenarios of a "grey goo" apocalypse versus, say, a bank hiding
otherwise discriminatory lending practices behind a neural-network black box
are radically different in plausibility. Given the esoteric nature of the
topic, it's all too easy to see where this is headed when it comes to
representative governance. Knee-jerk legislative umbrellas based on comments
eccentric megalomaniacs made to the news media are a fool's errand, and there
are a hell of a lot of foolish people out there.

~~~
alexandercrohde
Well, if his position is that there are various real AI threats that need to
be discussed and handled differently, then I'd agree with that position.
However, that's not how I read it, particularly given the title.

------
throwawayjava
The article is a poorly argued response to a strawman.

The NYC and federal advisory councils would be tasked with providing lawmakers
with impartial advice on A.I. writ large. That's reasonable. Bias in
algorithmic credit decisions and bias in algorithmic sentencing are different
problems, but share a lot of the same underlying technological
realities/problems. These areas _do_ have cross-cutting concerns, so a
legislative brain trust dedicated to disseminating knowledge about these
cross-cutting concerns is a wise use of resources.

No one is suggesting "regulating" general AI, so the author's red herring
question about "what is AI anyways?" is completely off-topic. Even "alarmists"
like Musk and Hawking are proposing much more concrete legislation, such as
banning algorithmically generated kill orders in weapons. And to the extent
that those people are interested in safe _general_ AI, they're currently
trying to achieve that goal by throwing research money at it (rather than
proposing concrete legislation regulating _general_ AI). Which is hardly a
reasonable thing to criticize!

Furthermore, the proposed approaches in NYC and at the federal level _aren't_
incongruent with the author's proposed sector-by-sector approach toward
regulation! These are, after all, simply proposals for _advisory councils_.

Therefore, the author does not provide a compelling reason to oppose any of
the _concrete_ legislation mentioned at the beginning of the article. And the
arguments that the author _does_ make betray, IMO, a superficial understanding
of the proposed legislation.

------
kiddico
I'm working my way through the "Blueprints for Armageddon" series in Hardcore
History. The attempts to regulate AI remind me of the Russian efforts to
regulate war before WW1.

In the first part of BfA (I think) he talks about the peace conferences in
Russia whose goals were to limit the progression of technology. Partly because
Russia wasn't prepared for a high tech war, and partly because large scale
efficient killing was kinda scary. Anyways, they wanted to limit what
technologies could be improved by freezing combat technologies where they were
at the time.

Dan talks about how it wasn't really that far-fetched an idea to artificially
limit the spread and extension of knowledge at the time. This is in contrast
to today, where it seems impossible for technological advancement to stop.
Tech changes so fast that it's difficult to imagine it slowing down soon.
Could we even slow down the advancement of AI if we wanted to? How?

Is there any dangerous technology whose development could be stopped even if
we saw it coming?

~~~
titzer
IMHO it should be illegal to develop an artificial mind capable of
consciousness. There are a lot of good reasons to avoid this, not just that it
could be unethical to subject such a mind to torture/pain.

A super-intelligent artificial mind is probably going to come up with its own
agenda and convince us to work in its interest and potentially against our
own. Given the state of computer security today, can you imagine how easily it
could spread as a worm?

------
gaius
The AI industry regulates itself by periodically going bankrupt

------
titzer
About the author: "Andrew Burt is chief privacy officer and legal engineer at
Immuta, a data management platform for data science, which enables companies
to create and manage A.I. and machine learning models."

Hmm.

------
ImSkeptical
>in the summer of 1970, Newsweek ran a cover story titled “Is Privacy Dead?”
detailing the “massive flanking attack” of computers on modern society.
Growing awareness of that threat led to broad appeals that echo modern
proposals to regulate A.I. “Eventually we have to set up an agency to regulate
the computers,” Senator Sam Ervin of North Carolina said in 1970.

>But instead of regulating all computers, the government sought a targeted
approach tailored to specific problems, passing regulations like the Equal
Credit Opportunity Act in 1974.

Thereby solving the problems of digital intrusions on privacy and credit
scores forever. Oh wait, no, I guess both of those things are incredibly
broken. Digital privacy is notoriously nonexistent, infringed on by the state,
criminals, and business. Credit and financial information is impossible to
keep safe and is available to any cybercriminal who wants it.

It's odd that the author's two examples of when the government got something
right by not preemptively intervening both turned out so poorly.

AI is fundamentally different from privacy, credit, or any other invention
that has happened before. The implications of this technology are too vast for
us not to spend time and money thinking about how to proceed safely. We may
only get one shot at developing true AI correctly, and establishing a public
office to help figure out how to do that seems like the least we could do.

If anyone finds themselves agreeing with this author, I highly recommend Nick
Bostrom's book Superintelligence. If you don't have time for a whole book,
Sam Harris has a good TED talk on the subject.

~~~
titzer
> Credit and financial information is impossible to keep safe and is available
> to any cybercriminal who wants it.

Which is why legislation should be aimed at preventing law-abiding entities
from ever storing it in the first place. Obviously, criminals cannot steal
what is not there.

Enforcement of these laws has been mild, and all companies and criminals know
this. Only the odd major embarrassment seems to motivate companies to protect
user data.

------
brndnmtthws
'AI' is such a meaningless buzzword that you could simply rename a product to
sidestep any pointless regulation.

~~~
mistercow
Regulations generally start out by defining the thing they're regulating. You
can call it whatever you want, but if it fits the law's definition, it's going
to be covered.

~~~
brndnmtthws
Sure, but pretty much all software qualifies as 'AI'. However, most people
think of Hollywood interpretations of AI, which don't exist. It's strictly a
matter of interpretation, and figuring out where on this spectrum a particular
piece of software falls is basically the equivalent of throwing darts at a
board.

~~~
psyc
In my private taxonomy, hello world through Excel are weak AI, the state of
the art is mostly narrow AI but fast becoming more general, and the endgame is
general AI.

~~~
brndnmtthws
What worries me is the 80-year-old judges (and other clueless participants in
the judicial system) who don't understand how computers work and will
ultimately be responsible for deciding people's fates regarding this stuff.

