
‘Artificial Intelligence’ Was the Fake News of 2016 - zhengiszen
http://www.theregister.co.uk/2017/01/02/ai_was_the_fake_news_of_2016/
======
YeGoblynQueenne
What a badly edited article. If this is the counterpoint to the hype, it's no
wonder the hype train is currently slingshotting around Neptune.

Still, the points they're trying to make are fair, especially this bit:

>> Out in the real world, people want better service, not worse service; more
human and less robotic exchanges with services, not more robotic "post-human"
exchanges.

And that's the risk to AI's future in a nutshell: artificial stupidity has the
potential to make our lives miserable like natural stupidity never has.

And that's no mistake. The danger in AI is not that it will become smarter
than ourselves and take over the world (where Pinky and the Brain failed). The
risk is that it will remain dumber than cabbages and still take over the
world, as we fall over ourselves to automate everything humans do, even the
things that don't get better with more automation, until we're all left
without a job, standing in a long line to speak to an AI assistant to get our
food stamps for the AI food dispenser.

Just an hour ago I was on the phone with a public service. The answering
system's speech recognition worked just fine- it parsed my 11-digit Customer
Reference Number despite my Greek accent. It did the job better than a human-
those 11-digit numbers are not meant for humans anyway! Still, it kept asking
me to press numbers to choose options- why didn't it ask me to speak those
aloud? And at the end of the line, when all the options ran out, and only
then, it let me know that the offices were closed for the holidays and I
should call again tomorrow- Byyye!

Is that natural, or artificial, stupidity? One way or another, it's a picture
of the future: imagine a 1000-page form in triplicate, stamping on a human
face, forever.

------
laurent123456
> For example, how many human jobs did AI replace in 2016? If you gave
> professional pundits a multiple choice question listing these three answers:
> 3 million, 300,000 and none, I suspect very few would choose the correct
> answer, which is of course “none”.

It's probably very hard to evaluate the impact of AI, but their answer is no
better (nor any better sourced) than those of the professional pundits they
criticise (by the way, I can't find any article by anybody claiming that jobs
have been lost due to AI). Rubbish theregister article as usual.

~~~
brudgers
The way in which technology replaces jobs is not usually explicit. Cell
phones, email and automated voice mail systems replaced a lot of
receptionists. Desktop computers replaced pretty much all the typists in the
typing pool and many secretaries. Smart phones replaced a lot of the
secretaries that were left.

The thing about AI is that it doesn't seem like AI in the rear view mirror.
Grammar check and voice recognition don't seem like a big deal twenty years
later. But they were science fiction in real life twenty years ago.

~~~
yati
> The thing about AI is that it doesn't seem like AI in the rear view mirror.

This. Related link:
[https://en.wikipedia.org/wiki/AI_effect](https://en.wikipedia.org/wiki/AI_effect)

------
mike_hearn
I disagree. Have been reading a lot of AI papers in recent weeks.

_"What we have seen lately is that while systems can learn things they are
not explicitly told, this is mostly in virtue of having more data, not more
subtlety about the data. So, what seems to be AI is really vast knowledge
combined with a sophisticated UX," one veteran told me._

Why be anonymous in such an article? You aren't leaking state secrets. I'd
like this 'veteran' to explain to me why recent algorithmic advances like the
DNC or recurrent entity networks are not "more subtlety about the data".

The last few years have seen major improvements in the underlying neural
network algorithms alone, like the addition of 'neural memory', big advances
in training of neural networks that have loops in them and so on. The major
progress in fields like question answering, story comprehension and game
playing aren't simply about using old techniques with more data. They do
reflect significant theoretical advances too.
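The "neural memory" and loop-training advances mentioned above all build on the basic recurrent step. A minimal sketch of that vanilla recurrence (not the DNC itself, just the loop it extends; weight shapes and names here are my own illustration):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    # The hidden state h is the network's "memory": it carries information
    # forward around the loop from one time step to the next.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b)

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(3, 4))        # input -> hidden weights
W_hh = rng.normal(size=(4, 4))        # hidden -> hidden (the loop)
b = np.zeros(4)

h = np.zeros(4)                       # memory starts empty
for x_t in rng.normal(size=(5, 3)):   # unroll over a 5-step input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b)
```

Training this unrolled loop (backpropagation through time) is exactly where much of the recent algorithmic work has gone; architectures like the DNC bolt an external, addressable memory onto this recurrent core.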

It's not a good article, just clickbait by Orlowski of the traditional sort.
His final point about liability is so vague it could apply to industrial
robots or autopilots too, yet somehow those manage to exist just fine.

~~~
YeGoblynQueenne
Well, Geoff Hinton is on record saying that the reason ANNs are working now,
where they didn't in the past, has more to do with having more data and bigger
computers and less with algorithmic advances (though those did help) [1]:

 _AMT: Ok, so you have been working on neural networks for decades but it has
only exploded in its application potential in the last few years, why is that?

Geoffrey Hinton: I think it’s mainly because of the amount of computation and
the amount of data now around but it’s also partly because there have been
some technical improvements in the algorithms. Particularly in the algorithms
for doing unsupervised learning where you’re not told what the right answer is
but the main thing is the computation and the amount of data.

The algorithms we had in the old days would have worked perfectly well if
computers had been a million times faster and datasets had been a million
times bigger but if we’d said that thirty years ago people would have just
laughed._

As to the papers you mention: researchers publish papers, and in them they
make claims. That's their job. The job of anyone who reads a paper is to take
it in good faith but retain a critical, skeptical stance throughout. If you're
not directly involved with ANN research, or implementing your own ANN systems
for some commercial application, it's very hard to know whether a claim in a
paper is useful to anyone besides the scientist who published it.

________________

[1] [http://techjaw.com/2015/06/07/geoffrey-hinton-deep-learning-...](http://techjaw.com/2015/06/07/geoffrey-hinton-deep-learning-in-baby-steps-and-the-future-of-google-and-ai/)

~~~
Eridrus
There are two ways to interpret what Hinton says:

a) the algorithms are unimportant because all that we needed was computation

b) the algorithms were ahead of their time

I think it is actually the latter that is true, in the same way that
electricity was a precondition for computation. Babbage designed the
Difference Engine and the Analytical Engine before they could realistically be
implemented; that doesn't mean the designs were unimportant.

------
stephengillie
AI isn't replacing grocery store checkers. It hasn't replaced auto mechanics
or professional chefs. We barely have self-driving cars and trucks, and we
don't even have robots to stock store shelves yet. Where is the brake-changing
robot? Where is the robot that puts cans of beans on the Walmart shelf? Would
humans even value such an automaton?

There will always be some problems that are trivial to have a human solve, but
very difficult for anything else. The Butlerian Jihad, and the underlying fear
that robots will take every single job, is as much science fiction as hard
light or teleportation.

~~~
gcr
A robot absolutely replaced my cashier. Now I can scan the barcodes, insert
the item into the bagging area, swipe credit card, etc. all without
interacting with a person.

Sure, it's not AI. It's not a deep reinforcement learning neural network
trained with Nesterov subgradient momentum.

But robots are robots. Whatever it is, my cashier's out of a job.
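For what it's worth, the update rule behind that name-drop is real and simple. A minimal sketch of Nesterov momentum on a toy quadratic (parameter names and the test function are mine, not anything from the thread):

```python
import numpy as np

def nesterov_momentum_step(x, v, grad_fn, lr=0.05, mu=0.9):
    # Nesterov accelerated gradient: evaluate the gradient at the
    # "look-ahead" point x + mu*v, then update velocity and parameters.
    g = grad_fn(x + mu * v)   # gradient at the look-ahead position
    v = mu * v - lr * g       # velocity update
    x = x + v                 # parameter update
    return x, v

# Minimize f(x) = x^2, whose gradient is 2x; x should converge toward 0.
x, v = np.array([5.0]), np.array([0.0])
for _ in range(200):
    x, v = nesterov_momentum_step(x, v, lambda p: 2 * p)
```

The look-ahead gradient evaluation is the only difference from plain momentum, and it is what makes the method "Nesterov" rather than heavy-ball.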

~~~
stephengillie
Last time I used self-check, I had to wait for another human to verify my age,
to purchase beer. The time before, I had to wait for a human to verify I had
bagged all of my items.

I was a grocery store checker in 2007, when these devices were first
announced. They are as capable as they were 10 years ago, handle the same edge
cases as 10 years ago, and require the same human intervention as 10 years
ago. And there are still human checkers at every store - usually they "turn
off" the self-check at night, since the human that watches it goes home.

All this AI is just a "mechanical Turk" - using AI to perform most of the
tasks a human doesn't want to, and keeping a human around for the edge cases
that are too expensive to automate. Humans have such robustness and variety
of ability that we can quickly solve most issues. After a certain point, it
just makes sense to keep a few humans around for the really rare edge cases,
like when the grocery store is flooding.

~~~
chillacy
> I had to wait for another human to verify my age, to purchase beer. The time
> before, I had to wait for a human to verify I had bagged all of my items.

So instead of two people to a register (one to bag, one to scan), you can have
one person man 12 self-checkouts? Sounds like an amazing reduction in labor.
Even if self-checkout were half as fast as traditional checkout, that one
employee is effectively serving far more customers per hour than they would
at a traditional checkout.
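The back-of-the-envelope math works out to a 12x gain under the comment's assumptions (the 30 customers/hour base rate below is a hypothetical number, chosen only to make the arithmetic concrete):

```python
# Customers served per employee-hour under the comment's assumptions.
traditional_rate = 30                  # customers/hour at a staffed lane (hypothetical)
staff_per_traditional_lane = 2         # one to scan, one to bag
selfcheck_rate = traditional_rate / 2  # each self-checkout is half as fast
lanes_per_attendant = 12               # one attendant watches 12 machines

per_employee_traditional = traditional_rate / staff_per_traditional_lane  # 15/hr
per_employee_selfcheck = selfcheck_rate * lanes_per_attendant             # 180/hr
```

Note the base rate cancels out of the ratio: halving per-lane speed while multiplying lanes-per-employee by 24 (12 lanes vs. half a lane each) gives 12x regardless of the hypothetical figure.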

------
lngnmn
Not AI, but the so-called Deep Learning meme.

AI was already well researched in the 90s.

The improvements in classification algorithms and early commercialization,
such as Siri or FB face recognition, are responsible for the current bubble.

No major breakthrough in AI has been made so far. Better classifiers are not
AI; they are mere pattern recognition.

~~~
adrianN
In science and engineering you almost never have a "major breakthrough" where
some radical change happens overnight. Progress is made by slow incremental
improvements. Only from a historian's bird's-eye view can you make statements
like "the steam engine started the industrial revolution". Steam engines
existed before James Watt; he just improved them a little to make them
commercially viable.

AI in particular suffers from poor definitions. Nothing computers can already
do is "AI"; only the things that are still impossible get called "AI". Doing
arithmetic was once thought to be a sign of intelligence. So was playing
Chess, or Go, or recognizing faces. Now that computers can do these things
easily, they are no longer considered difficult.

So while the recent progress with neural networks is likely somewhat
overhyped, you shouldn't discount the importance of making things commercially
viable. In the 90s we had software that could do image recognition, true, but
it wasn't accurate enough, general enough, or fast enough to be useful in
practice. That has changed over the last ten years, to the point where image
recognition has tons of commercial applications.

~~~
marcosdumay
> Only from a historian's bird-eye view can you make statements like "the
> steam engine started the industrial revolution".

I'm always baffled by that example. The Industrial Revolution was already
ongoing when steam engines became practical.

~~~
archgoon
Seems like it's unintentionally apt then ;)

------
ragebol
I'm leaving my AI gloves and scarf in the closet for a little longer.

I don't think there is a need for 'real AI' in order to replace a lot of
humans. Just normal programming in the hands of non-tech office workers can
already reduce a lot of human labor.

------
yalogin
I feel like this is too cynical a take on the "deep learning" and "AI"
phrases. I agree that so far the only thing deep learning has given us is
advanced correlation of data to sell us stuff better. Google knows this, and
so it has not hyped up its self-driving cars. Tesla, on the other hand, has no
ads to sell us; it has to sell cars, for which it had to force the AI concept
on us. I am cynical, however, about self-driving cars becoming a reality in
the near future. Of course there is probably a middle ground somewhere. The
article seems to say the ad-people-tracking tech and the more pure efforts
like self-driving cars are the same.

------
scientist
Ironically, on the right of this article there is a link to another article
from the same source, entitled "Top of the bots: This AI isn't a cold, cruel
killing machine – it's a pop music hit machine" [1]. Perhaps the link was also
placed there by a machine learning / AI algorithm...

[1]
[http://www.theregister.co.uk/2016/11/11/ai_pop_music_maker/](http://www.theregister.co.uk/2016/11/11/ai_pop_music_maker/)

------
tim333
Bit of an iffy article arguing "2016’s AI hype will begin to unravel in 2017"
because roughly:

3) When something goes wrong, such as a car crash... who do you put in jail?

2) "The Consumer Doesn't Want It" - eg Clippy, automated call centers

1) "AI is a make believe world populated by mad people, and nobody wants to be
part of it" - it's all a cult

Which is all a bit silly: 3) you don't put anyone in jail, you work on the
algorithms to reduce our existing 1m+ road deaths a year; 2) there's some crap,
but people like the Amazon Echo and I want my self-driving car; and 1) hardly
dignifies a reply.

Meanwhile in 2016, computers mastered Go, the tools became good enough that
the likes of Zuck and GeoHot could make somewhat functional Jarvis-like
assistants or self-driving cars in a few months of hacking, and IDC is
predicting the "AI Market is projected to grow from $8 billion today to $47
billion by 2020."

~~~
maglavaitss
I wonder if you'd be this dismissive if an autonomous car killed your parents
or children, and whether you'd still advocate "don't put anyone in jail".

Just wondering.

~~~
tim333
Probably not, but statistically I'm vastly more likely to be pissed off by a
human driver killing them. I think in the last year the numbers killed were
approximately a million by humans and one or two by self-driving cars. If the
percentage killed by self-driving cars gets over 0.1%, I'll worry about it.

------
basicplus2
Oxford dictionary

Intelligence...

"the ability to learn, understand and think in a logical way about things; the
ability to do this well"

From this it seems to me that AI is not possible without self-awareness, which
surely must be a prerequisite for understanding.

~~~
tempodox
Ferengi Rule of Acquisition #239: Never be afraid to mislabel a product.

------
baking
>Here I’ll offer three reasons why 2016’s AI hype will begin to unravel in
2017.

>3. Liability: So you're Too Smart To Fail?

Seems like there is some missing text?

~~~
ganeshkrishnan
1. is underwear gnomes.

2. is profit.

