
There is a blind spot in AI research - _lm_
http://www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805
======
hyperpallium
I wish HN discouraged clickbait titles, even when the article itself uses that
title. One solution is to append the answer to the original title, separated
by a "|", as used by:
[https://www.reddit.com/r/savedyouaclick/](https://www.reddit.com/r/savedyouaclick/)

For example:

 _There is a blind spot in AI research | Autonomous systems are already
ubiquitous, but there are no agreed methods to assess their effects_

If it's interesting, _I'll still click_.

~~~
goodplay
I second your proposal.

While I never post a comment on an article before reading it, I almost always
read the comments before reading the article to avoid clickbait. A brief
one-line summary that serves the same role as an abstract would eliminate the
need to read the comments first.

It might even help eliminate clickbaity titles in technical articles
altogether.

------
Houshalter
This is vastly overblown. People already deeply distrust algorithms.
Psychologists have studied this and call it "algorithm aversion": when given a
choice between a computer and a human, even when the computer makes much
better predictions, people distrust it.

In almost every domain where there is data and a simple prediction task, even
really crude statistical methods outperform "experts". This has been known for
decades. Yet in almost every domain algorithms are resisted, because people
distrust them so much, or fear losing their jobs, or both.
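
To make "really crude" concrete, here's a toy simulation (synthetic data, and
the "expert" is purely hypothetical) of a unit-weight linear model, in the
spirit of Dawes's improper linear models:

    # Toy simulation, not real data: a unit-weight linear model vs. a
    # noisy "expert" who overweights a single favorite cue.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Three standardized predictors, each weakly related to the outcome.
    X = rng.normal(size=(n, 3))
    outcome = (X.sum(axis=1) + rng.normal(scale=2.0, size=n)) > 0

    # Simulated expert: fixates on cue 0 and adds judgment noise.
    expert = (2.5 * X[:, 0] + rng.normal(scale=3.0, size=n)) > 0

    # Crude statistical method: sum the cues with equal (unit) weights.
    model = X.sum(axis=1) > 0

    print("expert agreement with outcome:", (expert == outcome).mean())
    print("unit-weight model agreement:  ", (model == outcome).mean())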

But humans are vastly more biased. Unattractive defendants get sentences twice
as long. People heavily discriminate based on political affiliation, not to
mention race or gender. Judges hand down far harsher sentences when they are
hungry. Interview ratings correlate negatively with job performance.

Humans are The Worst. Anywhere they can be replaced with an algorithm, they
should be.

The referenced ProPublica result has been criticized here:
[https://www.chrisstucchio.com/blog/2016/propublica_is_lying.html](https://www.chrisstucchio.com/blog/2016/propublica_is_lying.html)
"almost statistically significant"

~~~
lqdc13
Why do humans distrust "algorithms"? Maybe they had past experiences where
algorithms behaved worse than humans?

A recent example: Facebook replaced human-curated trending news with machine
curation, and it started trending fake news [2].

Another example is the algorithms that try to help you during automated phone
calls, which is why people always try to get to a human. Either the speech-
to-concept parsing/mapping is flawed, or the systems aren't programmed to
perform some specific tasks.

Another example is self-driving cars. Google cars have been involved in more
accidents per mile than average humans [1].

In general, people's intuition is built through repeated encounters; that's
why it is so great.

[1] [https://www.bloomberg.com/news/articles/2015-12-18/humans-are-slamming-into-driverless-cars-and-exposing-a-key-flaw](https://www.bloomberg.com/news/articles/2015-12-18/humans-are-slamming-into-driverless-cars-and-exposing-a-key-flaw)

[2] [http://www.theverge.com/2016/8/30/12702478/facebook-trending-topics-fake-news-megyn-kelly](http://www.theverge.com/2016/8/30/12702478/facebook-trending-topics-fake-news-megyn-kelly)

~~~
titzer
From [1],

> Driverless vehicles have never been at fault, the study found: They’re
> usually hit from behind in slow-speed crashes by inattentive or aggressive
> humans unaccustomed to machine motorists that always follow the rules and
> proceed with caution.

Which completely negates the point you are trying to make.

~~~
lqdc13
The point is they are involved in more accidents. The rest is spin.

The goal is to have the robot-driven car survive in a human world and perform
in this world better than humans, not the other way around.

------
Itsdijital
Look at Tesla with its "autopilot" feature. It's not really a true autopilot,
more of a driving assist, but people treat it as one. I think it's easy for
people to fall into the trap of relying heavily on something that is shiny,
new, and works well despite being imperfect - even when that imperfection is
explicitly stated.

Nanotech has a similar problem at hand. There are indications that
nanoparticles could pose serious health risks. Despite this, researchers are
pushing ahead full steam with bringing nanotech to market. The money going
into development far exceeds what's going into safety testing. In an AMA with
a nanomaterials researcher, I asked if he ever has concerns about the safety
of what he is making. His response was along the lines of "Sure I do, but
it's not my job to deal with that. I just get paid to develop the tech."

Tech development has always had a shoot-first, ask-questions-later approach.

~~~
quickben
No, it was marketed as an autopilot in the first place.

------
ggchappell
Key quote:

“People worry that computers will get too smart and take over the world, but
the real problem is that they’re too stupid and they’ve already taken over the
world.”

\-- Pedro Domingos, in _The Master Algorithm_ (2015)

~~~
ktRolster
It's an entertaining quote, I laughed, but I think everyone who works in AI
already knows this. It's not a blind spot.

~~~
blazespin
Right, try saying that the next time we have a flash crash that really kills
the markets.

There should be an addition to the statement though: "Unfortunately, humans
en masse are even stupider."

~~~
ktRolster
Flash crashes don't seem to kill markets. Too many people realize it's an
opportunity, and quickly buy.

 _Also worth remembering, the stock market is not the economy_

------
matco11
Key thought: AI's social and cultural impact - in other words, AI's
implications for "social mobility" (your rags-to-riches stories, the American
dream).

The authors are asking: in a hypothetical world where many decisions are AI-
assisted, what is the risk that AI systems slow social change because they are
too dumb to understand exceptions, peculiarities, and positive externalities?
What can we do to establish parameters that will allow us to know when a given
AI system is trained well enough to be used in the real world, with minimal
risk of undesired social and cultural implications?

------
visarga
Systematic analysis of AI biases is certainly needed. We train models on data,
but how is the data collected, and how biased is it? At least in AI we can
compensate for biases, but in human society they are much harder to counter.
There's hope for a better future if we can make fair AI.
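
For example, a common way to compensate is to reweight training examples so an
under-represented group counts as much as an over-represented one. A minimal
sketch with made-up data and group labels, using scikit-learn's standard
sample_weight parameter:

    # Minimal sketch: counter a skewed training set by weighting samples
    # inversely to their group's frequency. Data and groups are made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def inverse_frequency_weights(groups):
        # Give each group the same total weight, regardless of its size.
        values, counts = np.unique(groups, return_counts=True)
        weight_of = {v: len(groups) / c for v, c in zip(values, counts)}
        return np.array([weight_of[g] for g in groups])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)
    groups = np.array([0] * 900 + [1] * 100)  # 90/10 group skew

    clf = LogisticRegression()
    clf.fit(X, y, sample_weight=inverse_frequency_weights(groups))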

~~~
mcguire
AI biases come in new and unique varieties.

" _As another example, a 2015 study9 showed that a machine-learning technique
used to predict which hospital patients would develop pneumonia complications
worked well in most situations. But it made one serious error: it instructed
doctors to send patients with asthma home even though such people are in a
high-risk category. Because the hospital automatically sent patients with
asthma to intensive care, these people were rarely on the ‘required further
care’ records on which the system was trained._ "
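
A stripped-down, entirely synthetic sketch of how that kind of bias reaches
the model: the hospital's intervention removes asthma patients' bad outcomes
from the training labels, so the learner sees asthma as protective.

    # Synthetic sketch of the pneumonia/asthma trap described above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    asthma = rng.random(n) < 0.1
    severity = rng.random(n)

    # Ground truth: asthma raises the risk of complications...
    true_risk = severity + 0.3 * asthma

    # ...but asthma patients go straight to intensive care, so their
    # complications rarely show up in the "required further care" records.
    recorded = (true_risk > 0.8) & (~asthma | (rng.random(n) < 0.05))

    X = np.column_stack([asthma, severity]).astype(float)
    clf = LogisticRegression().fit(X, recorded)
    print("learned asthma coefficient:", clf.coef_[0][0])  # negative in this toy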

------
Animats
Scary robot video of the month.[1] This is not just a blind, repetitive
operation; previous X-rays and laser scans tell it what to do.

[1] [https://youtu.be/MZIv6WtSF9I?t=245](https://youtu.be/MZIv6WtSF9I?t=245)

~~~
goodplay
As morbid as it may be, I wonder if that system can tell the difference
between a lamb and a human of comparable size. The system might still be able
to identify (or misidentify) all the targeted joints.

------
SFJulie
With or without AI, we already have issues with too much automation and
assistance, and it gets bad when the automation fails or isn't maintained.

Basically, if you can drive a manual car, it is easy to drive an automatic
one, but the opposite is not true.

When I was a mover, old GPS units, and even new ones, got the address wrong
15% of the time. And it's not just that the GPS failed: what do you do when
you have no usable maps?

Well, we have fired the people who made the maps, so they are hardly updated
at the pace at which mayors and real-estate developers change the territory.
If you have an awesome GPS with no updated maps, your GPS is useless, no?

We are forgetting to do the heavy, costly, underlying maintenance of maps and
directions, and to train drivers to read signs, figuring GPS made all that
obsolete. Now we have to maintain maps, satellites, and computers, and live
with people unable to use a map and a compass, who are distracted while
driving by potentially wrong information, and who, relying on their GPS, are
too oblivious to read the sign saying they are entering a one-way street
against traffic.

Then too, the automation in Airbus, Tesla, and Boeing systems has proven less
valuable than pilots' experience when computers fail due to false negatives
(frozen pitot probes) or false positives (sun-blinded cameras). I think civil
and military accident records are a good source of information about the
"right level of automation".

The problem is that keeping workers up to date requires constant, heavy
practice without too much automation. And human time nowadays is expensive.

That is one of the reasons France (unlike Japan) kept automation in its
nuclear plants rudimentary: when a system is critical, you really prefer a
human who can handle things 99.999% of the time to a computer that does great
100% of the time if and only if its sensors work and nothing too catastrophic
happens (flood, tsunami, earthquake).

The problem is that industry wants to skimp on costly training and education
(not the university kind; I mean the kind that is useful). But knowledge you
have not yet crafted because circumstances changed (I will be delighted to see
how self-driving cars behave in massive congestion with deadlocks) will be
hard to program if we lose the common sense that comes from doing the work
ourselves. How do you correct a malfunctioning machine when you have forgotten
how to do the task correctly yourself? You may not even know when it will
fail - not because of the machine, but because you have lost your frame of
reference.

------
cs2818
I'm not too sure how practical the suggested "social-systems analysis"
approach is. It is summarized as:

"A practical and broadly applicable social-systems analysis thinks through all
the possible effects of AI systems on all parties."

which seems incredibly difficult to do completely. Hopefully the authors will
further describe their approach in future publications.

Also, somewhat of a nitpick, but the article states:

"The company has also proposed introducing a ‘red button’ into its AI systems
that researchers could press should the system seem to be getting out of
control."

in reference to Google, but cites a paper which discusses mitigating the
effects of interrupting reinforcement learning [0]. The paper makes a passing
reference to a "big red button" as this is a common method for interrupting
physically situated agents, but that is certainly not the contribution or
focus of the work.

[0]
[https://intelligence.org/files/Interruptibility.pdf](https://intelligence.org/files/Interruptibility.pdf)
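
For what it's worth, here is a runnable toy of what interrupting a learner
looks like - emphatically not the paper's algorithm. The skip-the-update line
is just one crude way to keep the override from shaping what the agent learns:

    # Toy, NOT the paper's method: Q-learning on a 5-state chain with a
    # random "big red button" that overrides the agent's action.
    import random

    N_STATES, GOAL, ALPHA, GAMMA, EPS = 5, 4, 0.5, 0.9, 0.1
    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # actions: 0=left, 1=right

    def step(s, a):
        s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
        return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

    random.seed(0)
    for _ in range(200):
        s, done = 0, False
        while not done:
            greedy = int(Q[s][1] >= Q[s][0])
            a = random.randrange(2) if random.random() < EPS else greedy
            interrupted = random.random() < 0.1  # red button pressed
            if interrupted:
                a = 0  # human override: force the "safe" action
            s2, r, done = step(s, a)
            if not interrupted:  # don't learn from overridden steps
                Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2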

------
ccvannorman
This is another angle of the "Weapons of Math Destruction" argument, and it
looks very relevant. Those who work on big data (esp. public sector) would be
wise to consider the implications.

------
Animats
For a good analysis of the problems of living with AIs, read the web comic
Freefall.[1] This long-running comic has addressed most of the moral issues
over the last two decades. Of course, that's about 3000 comics to go through.

[1] [http://freefall.purrsia.com/](http://freefall.purrsia.com/)

------
frozenport
The problem is that the AI community is treating itself like non-experts.
Explaining that AI needs to be controlled by telling horror stories of robot
domination is a good way to motivate lay people to support research, but it is
a distraction for professionals.

~~~
ChoHag
So the "robots are going to kill us" meme is good because it makes "the rest
of us" interested in the idea of AI?

~~~
frozenport
Kinda, but if you treat your own people (the AI community) like "the rest of
us" - well, then you disrupt real scientific work.

