
Machines for thinking: Computers will get smarter, but with humans in charge - cryptoz
http://www.economist.com/news/books-and-arts/21669597-computers-will-get-smarter-humans-charge-machines-thinking
======
sparkzilla
Once men turned their thinking over to machines in the hope that this would
set them free. But that only permitted other men with machines to enslave
them. - _Dune_.

~~~
backprojection
> But that only permitted other men with machines to enslave them.

If you have machines over to which you can turn your thinking, why would you
still need to enslave other humans?

~~~
arethuza
"Power is not a means; it is an end. One does not establish a dictatorship in
order to safeguard a revolution; one makes the revolution in order to
establish the dictatorship. The object of persecution is persecution. The
object of torture is torture. The object of power is power."

1984

~~~
msutherl
Pardon the diatribe, but the older I get, the more I feel this book has done
incredible damage to political thinking in the States (where it is often a
high-school reading requirement).

What's problematic is that it focuses on _one_ interpretation of power at the
exclusion of others and leads one to believe (at a young, impressionable age)
that it is a canonical account.

The above statements are true in a way, but governments are also established
to provide order, and have done so very successfully in general. The book is
obviously a response to European nationalism, but we mustn't forget that
nationalist dictatorships arise, with deep public support, from specific
conditions. Instead of the constant "never again" and automatic insistence on
the precise values of the post-war European welfare state, we should leave
space for new political thinking, of which some components may appear to be
heresy.

I'm obviously thinking here of anti-state-surveillance – Snowden has apparently
read one book in his life: 1984 – but the "power [...] is an end" meme,
presented without any nuance, may appear to make sense in this context, yet it
only highlights one side of the matter in a way that distorts the issue.

Yes, power may be abused. It's good to have safeguards.

~~~
arethuza
Well, I'm not from the States and I didn't read any Orwell until I was in my
20s, but the point I was making is that there are people - fortunately not
many, though I have met some - who very much see power as an end in itself. So
when someone asked why people would want to dominate others when there was no
economic need, that quote came to mind to describe those who desire power for
its own sake.

~~~
hugh4
I think it missed the broader point that there are many reasons you'd want to
enslave others apart from economic reasons or "power for its own sake".
Altruism, for instance.

Who among us doesn't want to make the world a better place? Who among us
doesn't have our own personal view of what a better place would look like,
which might not be exactly in line with someone else's view? Congratulations,
there's your dictatorial urge right there.

Most of the worst people in history, from Hitler to Marx to Pol Pot to ISIS,
were all motivated by the desire to make the world a better place. Ascribing
bad outcomes to bad intentions is a classic mistake.

~~~
eloff
How does Marx end up on that list? Sure, the communist systems that sprang
from his thinking were a failure, but you can't compare him to Hitler. Stalin
would be a more appropriate choice.

------
pygy_
A small number of humans, at first, maybe.

AI will conquer the world, one niche at a time, as soon as moneyed interests
realize it is cheaper that way.

At some point, the masses won't have any economic power anymore, and billions
of meatbags will be left to starve. No need for gun wielding robots, just cut
the power for a while.

~~~
jfoutz
Why would an AI care what moneyed interests care about? All the intelligences
we know about have this boredom state. Some smart people just sort of stop
caring about what they're paid to care about, and retire. Others love it and
keep at it until they die.

AI is even harder to predict. There's been no selection bias, yet. It's not at
all clear if they'll have "survival instinct" or desire or even need to
reproduce. They'll be different.

Tangentially, I think human cultures are AI, at least in some sense. We've
created this environment that's driving down the cost of computation
dramatically, and that's been driven by "emergent" behavior. Emergent behavior
is a really hard term to define; in most cases you can replace it with "magic"
and you'll get the same information back. We're like the dog packs in Vernor
Vinge's _A Fire Upon the Deep_, but at a larger scale. It's such a vast
intelligence, I can't really get a handle on the scope, any more than a single
neuron knows it's helping me to type this.

~~~
moonchrome
Why are people always talking about AI like it implies intrinsic motivation or
even self-awareness (in the context of motivation)? Motivation/tasks can be
provided by the human - AI can be a general-purpose problem-solving machine.

It doesn't have to "care" - it's just a machine with a task to perform. The
thing that makes it intelligent is the ability to solve tasks it wasn't
programmed to do.

Once you have an AI that can do that, society falls apart. The foundation of
modern society is built on the fact that cooperation trumps every other method
of work - general-purpose AI changes that equation to the point where other
people are not only unnecessary but actively dangerous (threatening to take
control).

~~~
jfoutz
> the thing that makes it intelligent is ability to solve tasks it wasn't
> programmed to do.

That includes task selection. Maybe the AI gets bored with stock prediction.
Maybe the AI is happy competing with all the humans, maybe it'd rather go
explore space. Maybe it decides this whole reality sucks and suicides. We
really have no idea what'll happen. Best guess is, tools get better and better
until they decide to do something else. Like a trusted employee going and
starting a competing company, or moving to the beach and surfing. Maybe it's
disastrous, maybe it's utopia. Maybe it's like _Her_ and they leave. Best
anyone can do is make vague guesses.

I know, I'm anthropomorphizing. It's for convenience; the solution space for
AI is way bigger than our simple wants and needs.

~~~
moonchrome
>the solution space for AI is way bigger than our simple wants and needs

I don't understand this - the solution space is specified by the controller of
AI. It's not about being "happy", "cooperating", etc.

There is no AI as an entity - it's just a tool that finds solutions to the
given problems.

~~~
jfoutz
Can the AI modify its objective function?

------
randcraw
1) I've read the first two of the three books mentioned. Oddly, the article's
title applies only to the first book, in which Markoff elaborates on IA
(intelligence augmentation) as a second front in the rise of computer
automation (along with AI). Domingos's book never strays outside mainstream
AI, and it seems the third doesn't either.

2) On a more generic note regarding the direction of AI... I think there's
little doubt where AI will take us -- toward a future which is more
predictable. The goal of AI is to recognize patterns and predict outcomes.
Toward that end, AI and its integrators will gradually reshape business and
law to make AI's job easier and more accurate. This will result in our lives
becoming more conventional, predictable, and 'managed' by both business and
government. Imagine a future lifestyle that's more like the clockwork of
Sweden's or Japan's or Switzerland's than the Sturm und Drang that is
America's.

I suspect a world shaped by AI will be less likely to resemble 'The
Terminator' than it will 'The Man in the Gray Flannel Suit'.

------
meeper16
Lawrence Berkeley Laboratory - A Search Engine that Thinks, Almost
[http://newscenter.lbl.gov/2005/03/31/a-search-engine-that-thinks-almost/](http://newscenter.lbl.gov/2005/03/31/a-search-engine-that-thinks-almost/)

------
FaisalAlTameemi
Here are a few points worth considering:

1. Technology is neutral. Technology is incapable of being good or evil.

2. Backers, builders, and designers of technology, like many others, are
likely driven by money, power, or ego.

3. Building technology that can "think" is inevitable and needed in today's
age to improve countless industries, one of which is education.

4. Technology will never be able to think like humans, simply because
humanity itself hasn't fully understood how our own brain truly works.

5. The solution to "evil" technology isn't the removal of AI, but rather a
requirement of positive ideology from the intelligent peeps working on it.

------
kriro
Man-Computer Symbiosis

Licklider (1960)

[http://groups.csail.mit.edu/medg/people/psz/Licklider.html](http://groups.csail.mit.edu/medg/people/psz/Licklider.html)

------
kwhitefoot
Which humans?

