
Artificial intelligence could leave half the world unemployed - jonbaer
http://www.theguardian.com/technology/2016/feb/13/artificial-intelligence-ai-unemployment-jobs-moshe-vardi
======
YeGoblynQueenne
>> “Humanity is about to face perhaps its greatest challenge ever, which is
finding meaning in life after the end of ‘in the sweat of thy face shalt thou
eat bread’,” he said. “We need to rise to the occasion and meet this
challenge.”

In the past that wasn't a problem. I'm thinking of classical Greece where the
majority of the work was done by slaves and free citizens were, well, free, to
better themselves. The result was an unprecedented flourishing of the arts,
science and philosophy, and essentially the birth of western civilisation. In
later periods (the Renaissance, the Industrial era and so on) there was also
great progress in the sciences and in political philosophy, achieved by people
who didn't have to labour in the fields for a living.

So there should be no question that humanity is indeed able to better itself
once it's relieved of the need to toil every day to sustain itself.

The real question is: will increased automation really mean a high enough
standard of living for everyone? So far, automation has mostly made the work
left to humans less fulfilling, rather than more: McJobs, call centres,
warehouse work, retail... hardly intellectually rewarding work.
Instead, that kind of work crushes your spirit and leaves you no time or
energy to pursue your interests and educate yourself or even exercise your
body to improve your health.

I do believe there is cause for concern. My concern is that in the future, the
majority of work will be menial, meaningless, unproductive tasks where the
human worker is just a cog in a huge automated machine over which he or she
has no control. I fear we'll just become a lesser, less well-paid kind of mindless
drone, alongside our wonderful machines.

And as to those machines I think the big risk is not that automation will
become better than humans at doing jobs. The real risk is that automation will
reach a mediocre level, even possibly a very low level of competence, but it
will be employed anyway because of its competitive pricing.

In other words, I see it as very plausible that even today's experts will be
slowly replaced by incompetent, inexpert machines that will do the work that
humans do today, for much less money and at a much lower standard.

If we're ever so dumb as to hand over our future to our machines, at least the
ones we have now, the end result will be misery and unhappiness for all.

------
YeGoblynQueenne
Now, about our wondrous machines and algorithms. This needs a different
comment.

>> Selman, a professor at Cornell University, said: “Computers are basically
starting to hear and see the way humans do,” thanks to advances in big data
and “deep learning”.

I'm a student of AI. I'm studying for an MSc in Intelligent Systems at the
University of Sussex in the UK. I come from a classical AI background, with
studies in discrete maths, logic and logic programming in particular. I've
studied machine learning and to a lesser extent neural networks. I'm familiar
with deep learning.

This limited knowledge nevertheless gives me the strength of conviction to say
that, no: the quoted comment is not a good representation of the state of the
art. AI systems are still far from perceiving the world "the way humans do".

Machine learning is a powerful tool, but it has severe limitations. Deep
learning is certainly not magic, and neural networks in general are "just"
optimisation (except that we're optimising compositions of functions). I really don't
see how any technique that we have at this point in time can be used to
produce machines with anything approaching human levels of competence in the
complex tasks that humans are good at.
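To make the "just optimisation" point concrete, here is a toy sketch of my own (not any real system's code): "learning" below is nothing more than gradient descent on a mean-squared-error loss, fitting a single linear neuron to the made-up target y = 2x + 1.

```python
# A single linear "neuron" trained by gradient descent on mean squared
# error: the "learning" is literally just optimisation of a loss function.

def train(data, lr=0.05, steps=2000):
    w, b = 0.0, 0.0                   # parameters to optimise
    n = len(data)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y     # prediction error on one example
            gw += 2 * err * x / n     # d(MSE)/dw
            gb += 2 * err / n         # d(MSE)/db
        w -= lr * gw                  # gradient descent step
        b -= lr * gb
    return w, b

# Toy target function: y = 2x + 1, sampled on [-1, 1].
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
w, b = train(data)
print(w, b)  # converges close to w=2, b=1
```

Deep learning stacks many such parameterised functions and differentiates through the whole composition, but the principle is the same descent on a loss.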

Sure, we have machines that can drive cars or play Chess or Go competently.
However, those are very clearly delineated tasks. Chess and Go are games with
well-known rules in a restricted domain. Driving, too, is a strictly defined
domain: for instance, you won't see a Google car driving off-road. My theory
is that without roads or signs to guide it, it would not be able to navigate
on its own. I might be wrong about that.

The important thing is that although machine learning algorithms can be
trained on any conceivable task, and the same algorithm can often be trained
on a large variety of tasks, changing tasks requires new training. This
training requires resources that scale, very often exponentially so, with the
complexity of the task. Training a Chess-playing AI requires a lot of data and
computational power. Training a Go-playing AI requires a lot of _different_
data and another lengthy period of training. For a game where not enough data
is available, it would be difficult to obtain good results from machine
learning at all. And machine learning on its own has historically not produced
the strongest results in competitive game-playing AI.
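Here's a minimal sketch of that "changing tasks requires new training" point (my own toy example, not from the article): the very same perceptron learning algorithm masters AND, but its trained weights are useless for OR until it is retrained on OR's own data.

```python
# The same learning *algorithm* handles both tasks, but the trained *model*
# does not transfer: a new task means new data and a new round of training.

def train_perceptron(data, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred       # classic perceptron update rule
            w[0] += err * x1
            w[1] += err * x2
            b += err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w_and, b_and = train_perceptron(AND)
# Perfect on the task it was trained for...
assert all(predict(w_and, b_and, *x) == t for x, t in AND)
# ...but the AND-trained model misclassifies some OR inputs.
assert any(predict(w_and, b_and, *x) != t for x, t in OR)

# Only retraining on OR's own data yields a correct OR model.
w_or, b_or = train_perceptron(OR)
assert all(predict(w_or, b_or, *x) == t for x, t in OR)
```

For a two-line truth table the retraining is trivial; for Chess versus Go it means entirely different datasets and a great deal of compute, which is the scaling problem described above.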

Humans on the other hand excel at generalising from very few examples and can
often pick up a new task with minimal training. For instance, from playing
football to playing basketball, or from playing Magic: the Gathering to
playing Poker. We excel at very different types of tasks: competitive tasks,
collaborative tasks, games, survival, commerce, anything you can think of.

I don't see how machine learning will ever be able to accomplish such feats,
no matter how competent individual algorithms can become in specific tasks.

