
How Far to AI Utopia? - nefitty
http://www.milesbrundage.com/blog-posts/how-far-to-ai-topia
======
Houshalter
There are two different types of discussions that happen about AI. They are
very different but often confused for each other.

The first is the very near future, 10-20 years or so. Machine learning is
rapidly advancing and we are getting all kinds of cool toys. Automating things
that previously weren't automatable. 90% of jobs probably could be automated
soon, and there will be huge societal consequences. That's what this article
is about, mostly.

The second is far more interesting. Strong AI. The distant future of 30 years,
and maybe sooner. I think it's very likely this will happen in our lifetime,
though obviously it's impossible to know for sure.

This isn't just replacing a few jobs with machines. This is replacing _all_
humans with machines. When we get AIs that are just as smart as humans. And
eventually _far_ smarter than humans. AIs which are so far beyond us that we
look like chimps.

We would not be able to compete with the AIs. They would be able to get
whatever they want. Humans would no longer be in control. It's very, very
important that we make sure the AIs we build want what we want. Then we can
get a utopia so advanced we can't even imagine it. But if we get it wrong we
will just be replaced or destroyed.

And we haven't the slightest idea how to do this, or even if it's possible.
Creating an AI which wants to do what its human creator wants it to do is a
remarkably hard problem, especially if that AI is much more intelligent than
its human creator. This is the so-called "control problem".

~~~
jahnu
> The second is far more interesting. Strong AI. The distant future of 30
> years, and maybe sooner. I think it's very likely this will happen in our
> lifetime

Why? I'd genuinely like to know. I've yet to read anything to convince me this
is true.

~~~
sapphireblue
The progress in machine learning/deep learning over the last 5 years has been
staggering. Read about the new state-of-the-art results. There is already
superhuman visual object recognition, along with voice recognition, good
machine translation, description of scenes in natural language, question
answering (Facebook's bAbI dataset), visual question answering, algorithm
learning (Neural Turing Machines, Neural GPUs), and finally really strong
reinforcement learning (DeepMind's DQN and A3C algorithms).

The last one, reinforcement learning, is really important because it's a basis
for general intelligence. By rewarding/punishing a reinforcement learning
agent you can train it to execute very complex tasks, for example navigating a
maze with pixel input or controlling a robot body to perform tasks. DeepMind
is on the cutting edge of this kind of research; you can read about their
latest agent here:
[http://arxiv.org/abs/1602.01783](http://arxiv.org/abs/1602.01783)
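
A minimal sketch of that reward/punishment loop, for anyone who hasn't seen it
in code: this is plain tabular Q-learning on a toy corridor task, not
DeepMind's DQN or A3C, and all names and numbers (N_STATES, the +1 reward,
etc.) are just illustrative:

    import random

    # Toy environment: a corridor of 6 positions; the rightmost one is the goal.
    N_STATES = 6
    ACTIONS = [-1, +1]                   # move left or move right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration

    def step(state, action):
        """One environment step: reward +1 only when the goal is reached."""
        nxt = max(0, min(N_STATES - 1, state + action))
        reached_goal = (nxt == N_STATES - 1)
        return nxt, (1.0 if reached_goal else 0.0), reached_goal

    for episode in range(500):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, reward, done = step(s, a)
            # Q-learning update: nudge toward reward + discounted future value
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s2

    # After training, the greedy action from every non-goal state should be +1.
    print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})

The DeepMind agents work on the same principle, except the Q table (or policy)
is replaced by a deep network, so the agent can cope with pixel input instead
of a handful of discrete states.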

With such a rate of progress it's not inconceivable that a sub-human but
general intelligence could be developed in 10-20 years, or even sooner.

~~~
erikpukinskis
Do you not recognize how weak this inference is?

"We are going faster and faster so we will get there" is not the way you
calculate the likelihood of reaching a destination within a certain time.

~~~
monkmartinez
> "We are going faster and faster so we will get there" is not the way you
> calculate the likelihood of reaching a destination within a certain time.

Why not? With incomplete information, you cannot say the opposite is true
either.

At the end of the day, most of us have no direct control over the AI
situation.

~~~
erikpukinskis
I think a better approach would be to start with a more opinionated model of
cognition, and then look at how the space defined by the model is getting
coverage in research.

"The brain is a Von Neumann machine and our ability to model it tracks with
Moore's law" is demonstrably false. There are better models than that.

------
d33
Just curious, do we have ANY examples of AI actually improving our lives today
that we couldn't write off as examples of privacy invasion (like Facebook's
face detection)? I'm getting the impression that it's not the AI that we
should expect the utopia to appear from - instead, I put a lot of hope in
microeconomies created by services like Uber or Airbnb that drive prices down.
If only those were free and decentralised...

~~~
adrianN
You'd be hard pressed to find a definition of AI that people can agree on that
existing systems fit.

I'd count the stuff Google does to bring me relevant search results as AI. That
improves my life a lot. Automatic sorting of letters by destination is also
pretty useful, as are automatic quality checks in factories. If you count
optimization as AI, there are many applications, for example in chip design,
that improve your day-to-day life a lot.

~~~
monkmartinez
Your first sentence nails it; from AI being nothing short of the Borg, to AI
being smart factories that are XX% automated, to Google searches... agreeing
on a definition is problematic.

Some thoughts:

I think we have automated a great deal of our lives, and automation is growing
exponentially at present. Once we have achieved some critical breakover in
automation, I believe we will start to see unification systems. These
unification systems will bind the various forms of automation, relegating
human input to a bygone era. That is, the unification system will learn,
optimize, and execute the "complete stack" of various industries... Think
product distribution from factory to dock to end user.

I don't think this type of automation is that far off. Is that "AI"? To me it
is, and I suspect most people outside of HN will call it something along those
lines. When some kind of AI like Varuna in the novel _Influx_ by Daniel Suarez
hits the streets, people will not give one shit if you call it AI or God...
they will be very, very afraid of it.

------
awgneo
"It’s not my view that AI is sufficient to realize utopia, though it may be
necessary." Necessary when 93 out of 100 people on the planet currently lack a
college education? I feel like the technical elite, in their submission to
capitalism, are giving up on the prospects of humanity.

~~~
sapphireblue
I don't have a college degree (that doesn't mean I don't have lots of
technical experience though, including some machine learning at an amateur
level) and I'm not from the 1st world, but I disagree with you on that matter.
I don't see how education will help us reach a truly utopian future. It looks
like the developed world has pretty much hit the ceiling of what's possible
for a society controlled by (educated) humans. While I find the 1st world an
overall nice place, it is in no way a utopia: the day-to-day experience of the
average person isn't that good, in either the long or the short term. If
that's the best we can do with education, we should probably take another path
instead of repeating the same mistakes made by the developed world.

It looks like the main problem of the modern world is complexity. The
infrastructure (physical, social, IT) has become too complex for mere humans
to control. Humans are slow, learn slowly, have limited attention spans, and
have a limited capability of understanding complex nonlinear systems (e.g.
markets, social dynamics). IMHO the world is currently in a state of severe
attention deficit. There are too many problems at all levels, yet humanity as
a whole has only so much attention to apply to them, so only the absolutely
critical issues are solved (e.g. the 2008 financial crisis or the Flint water
scandal) while everything else is neglected until it becomes critical.

Machine Intelligence (both Machine Learning and more general AIs, e.g. agents
based on unsupervised+reinforcement learning) could give intelligent attention
to every big and tiny problem there is: in politics, society, infrastructure,
healthcare, etc. Instead of having to deal with a constant stream of crises,
AI systems could prevent them from occurring before the catastrophic event
takes place, just as with preventive medicine. That, plus automated
manufacturing & distribution, could open the way to a really pleasant life
experience.

Of course it sounds too easy to be true. It looks like the biggest obstacle to
realizing this variant of a utopian future is politics. The human primate is
naturally hierarchical, and these instincts influence our society & way of
life profoundly. Modern society looks like a pyramid with the wealthy &
powerful at the top, a docile (but shrinking) middle class in the middle, and
the controlled poor at the base. The people on top and in the middle are quite
well off, and they are afraid of losing power (even to machines, even if there
were a mathematical proof that everyone would be better off after we delegate
decision making to AI agents).

It looks to me that, from the perspective of the rich & powerful, there really
is no need for automation/AI: why do they need robots when they are used to
hiring poor people to serve them? Why do they need automation when, with the
power of money, they can order thousands of people to manufacture any
product/experience for them? That's the status quo that will have to be
disrupted to have an AI/robot-based automated utopia for everyone, IMHO.

~~~
visarga
> why do they need robots when they are used to hiring poor people to serve
> them?

It's not going to matter what they need. Technology advances at a rhythm of
its own. It has done so for hundreds of years, predictably. Not even big wars
were able to put a dent in its pace. Just as Google replaced libraries as our
source of information, Napster affected the music industry, and social
networks affected news, there's going to be an AI transition, and it will
happen no matter what. It's important to keep access to AI open and not place
its power in the hands of just a few.

It will be the continuation of the mobile phone era. Mobile phones with search
engines and social networks empowered billions of people; for many they are
the main communication device, and some don't even have a TV or laptop, but
they still have a phone. Everyone got a phone, and all our modern phones are
amazing compared to the best phones from a decade ago. What technology has
given everyone, not even money could buy a mere 10 years ago. But on the other
hand, the phone also empowered surveillance and Big Brother.

The AIs will amplify this effect even more. They will empower people to a
higher degree than phones did, but they will also make people easy targets for
manipulation. They will give the technologists who control the AI extremely
intimate knowledge of a person, and direct access to influence him or her.

------
delinka
|---|* About this far.

But seriously, we've had predictions of "strong AI in 5/10/20 years" for
decades now. The best we'll be able to do in such a short timeframe is what we
do now: study a problem, create a [complex] solution, code the solution in a
programming language. "AI Utopia" will arrive some decades after we've figured
out how to make a system that can learn and retain knowledge over the long
term, much like a biological brain.

Jeph Jacques' cartoon take on AI and sentient companions seems plausible: a
big central AI grows and evolves and often spins off smaller AIs to be placed
in androids. I don't know where the idea originated (it might be his?) but
it's a very reasonable fiction for how we'd originate individuals in high
quantity.

(*not to scale)

~~~
herval
It's not like things advance linearly, though. Even the big minds in AI said
it would take decades before an AI would beat a human at Go, for instance.
Image recognition is another example that was "stuck" with relatively high
error rates for many, many years, and suddenly got "solved".

Our brains are just awful at predicting major breakthroughs...

------
pron
I find that many discussions of AI suffer from a basic confusion of
intelligence with the more general notion of computation, which results in a
severe overestimation of the power of intelligence and an underestimation of
the power of non-intelligent computation. General intelligence -- at least as
we currently understand it -- is no more than a particular algorithm or class
of algorithms, and therefore:

1\. Overestimation of intelligence: Computational complexity theory -- in
particular the time/space hierarchy theorems, which, IMO, are the most
important theorems in all of computer science (and are basically
generalizations of the halting theorem) -- tells us that problems form an
infinite hierarchy of difficulty, and that some problems require a certain
amount of computational resources regardless of the algorithm used. In
particular, we know of plenty of very important problems (e.g. NP-complete, or
PSPACE-complete) for which intelligence does not seem to help at all, and
those problems are faced in many real-world scenarios. Worse, we know that for
many of these problems, there can be no efficient approximate solution
employing _any_ algorithm. Artificial intelligence, therefore, cannot possibly
be an effective way to solve these very real problems (see the toy sketch
below, after point 2).

2\. Underestimation of non-intelligent computation: It is unclear that the
"general intelligence algorithms" are the best use of computational resources.
Humans (and mammals) are far from being the most successful species on Earth
by almost any measure of success (we are not the most plentiful numerically,
not the largest in biomass, and not even the most capable of shaping the
planet). It is therefore a mistake to believe that the "intelligence
algorithms" represent either the most severe threat or the most powerful
"problem solving" approach.
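
To make point 1 concrete, here is a toy brute-force SAT solver (my own
illustration, not something from the article; the encoding is the usual
"positive integer = variable, negative integer = negated variable"
convention). SAT is NP-complete, and as far as anyone knows no algorithm,
however "intelligent", avoids a worst-case search space that doubles with
every extra variable:

    from itertools import product

    def brute_force_sat(clauses, n_vars):
        """Try all 2**n_vars assignments; return a satisfying one, or None."""
        for bits in product([False, True], repeat=n_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return bits
        return None

    # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
    print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))

Heuristic solvers do far better than this on many practical instances, but the
point stands: being smarter changes the constant factors and the typical case,
not the worst-case complexity class.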

Artificial intelligence seems to be particularly useful when interacting with
other intelligent beings (i.e. humans). For example, there has recently been
some success in employing an "intelligent" algorithm for winning the game of
Go. However, it is known that the "Go problem" is EXPTIME-complete, and
therefore the intelligent algorithm can only be successful when playing
against other players who employ a similar algorithm (namely, humans). It is
by no means a general solution to the "Go problem". Since "disease" is not a
problem posed by humans, and pathogens employ computation in a very different
way than intelligent beings do, it is hard to believe that AI would be able to
eradicate disease, and so on.
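
For a sense of scale on the Go point, here is a back-of-envelope calculation
(my own rough figures: roughly 250 legal moves per position and roughly 150
moves per game are the commonly quoted estimates for 19x19 Go):

    # Why exhaustively "solving" Go is hopeless, and why beating humans is a far
    # weaker achievement than solving the game.
    branching, depth = 250, 150
    naive_game_tree = branching ** depth
    print(f"~10^{len(str(naive_game_tree)) - 1} positions")   # roughly 10^359

A program that merely out-searches and out-generalizes human players never has
to touch more than a vanishing fraction of that space, which is exactly the
sense in which recent Go programs are not a solution to the "Go problem".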

