
Human society is unprepared for the rise of artificial intelligence - ForHackernews
https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/?single_page=true
======
bradleybuda
Even in retirement, I’d wager Henry Kissinger is far more likely to mean the
end of human history than AI is.

~~~
wmil
Hey, he's committed himself to improving the world by sitting on the boards of
innovative companies like Theranos.

------
JoeAltmaier
The idea about 'untethered AIs' is a good one - what will ground an AI in
human ethics? Currently they are designed to win. Will it even be possible to
create an AI that 'cares' about the outcomes it's calculating?

Even if it is possible to 'tether' AIs, won't we just remove those limits?
Look what we do to fake persons today - corporations. They are like AIs in
that small calculations made throughout the entity guide the decisions as a
whole.

We cyclically add boundaries to what they're allowed to do, then the next
administration removes those boundaries in the interest of unbridled growth.

We'll do the same with AIs. They'll calculate a 'best outcome' of letting
millions die in an Indonesian drought, because the average standard of living
will increase for everyone else. In one administration we'll want to temper
such decisions with humanity; in the next we'll just want more beer.

~~~
AndrewKemendo
_what will ground an AI in human ethics?_

What grounds humans in "human ethics"? Is it thoughtful, rational decision-
making or the whims of the powerful at the time?

And which human ethics are you grounding in? Aristotelian, Cynic, Stoic,
Theological?

There seems to be a common idea that humans all share some similar ethical
framework or thread. I don't think that holds. In fact, I don't know of any
single ethic that you could argue every single human holds similarly. Maybe
you could argue for the will to survive and reproduce, but even that is highly
variable.

The idea that we should or could build some kind of ethical homogeneity into
our engineered intelligent systems is absolutely silly.

~~~
JoeAltmaier
Thou Shalt Not Kill is a pretty good start

~~~
AndrewKemendo
This is a common example, to which the common question is: "Is there
absolutely no case in which killing is a justified and moral position?" Even
then you must specify further. "Don't kill", but only humans, or all things?

If another human is attacking you with intent to kill and you have no other
options? How about in a case like the trolley problem [1] where you are forced
into a challenge of scale? These are all valid problems with designing current
engineering systems, not just AI systems.

It's possible to philosophically "bite the bullet"[2] and say "No, it's never
justified or moral" however that doesn't pass the test of practicality.

So, something as seemingly obvious as "Don't kill" doesn't have firm
boundaries from which we can create a universal set of mathematically
implementable, and acceptable, rules.

[1][https://en.wikipedia.org/wiki/Trolley_problem](https://en.wikipedia.org/wiki/Trolley_problem)

[2][https://en.wikipedia.org/wiki/Bite_the_bullet#In_philosophy](https://en.wikipedia.org/wiki/Bite_the_bullet#In_philosophy)
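The difficulty described above can be made concrete with a toy sketch (all
names and numbers here are hypothetical, invented purely for illustration):
encoding "don't kill" as a hard constraint leaves an agent unable to choose
in a trolley-style dilemma where every available action causes some deaths,
and the obvious utilitarian fallback is exactly the kind of trade-off the
commenters find contentious.

```python
def permitted(action_deaths: int) -> bool:
    """Naive hard rule: an action is permitted only if it kills no one."""
    return action_deaths == 0

# Trolley-style dilemma: pull the lever (1 death) or do nothing (5 deaths).
options = {"pull_lever": 1, "do_nothing": 5}

# The hard rule permits neither action, so it cannot decide at all.
allowed = [name for name, deaths in options.items() if permitted(deaths)]
print(allowed)  # []

# A utilitarian fallback ranks actions by body count instead -- decisive,
# but it bakes in precisely the scale trade-off under dispute.
least_harm = min(options, key=options.get)
print(least_harm)  # pull_lever
```

The point is not that either rule is right, but that even a two-option toy
forces a choice between an undecidable absolute and a contested aggregation.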

~~~
JoeAltmaier
Is there no spectrum between absolutes? Anyway, the AI will have no concept of
the notion unless it's part of the calculation. And Utilitarian notions are
notoriously repugnant.

------
bpizzi
At the article's end: '[...] AI developers, as inexperienced in politics and
philosophy as I am in technology [...]'.

What a nice piece of flawed logic: 'I am not educated in technology but I'm
educated in politics and philosophy, therefore anyone educated in technology
is not educated in politics or philosophy'.

~~~
bobthechef
I don't see how you could read that from the sentence. There's no logical
inference in that fragment, just a claim that people in technology are as
inexperienced in politics and philosophy as he is in technology. On the whole,
that is an accurate observation.

~~~
theoh
From the Wikipedia page
[https://en.wikipedia.org/wiki/The_Two_Cultures](https://en.wikipedia.org/wiki/The_Two_Cultures)

"In his opening address at the Munich Security Conference in January 2014, the
Estonian president Toomas Hendrik Ilves said that the current problems related
to security and freedom in cyberspace are the culmination of absence of
dialogue between "the two cultures": "Today, bereft of understanding of
fundamental issues and writings in the development of liberal democracy,
computer geeks devise ever better ways to track people... simply because they
can and it's cool. Humanists on the other hand do not understand the
underlying technology and are convinced, for example, that tracking meta-data
means the government reads their emails."[12]"

~~~
AstralStorm
They're perfectly fine with the last point. While the government is not
reading the emails, it can infer enough from the metadata alone that reading
the actual contents is irrelevant.

(A detective can do so as well. But that is much more expensive and doesn't
scale.)

------
tgb29
I still haven't found a good talk or essay about how exactly 'AI' will change
anything. The only obvious difference is shifts in wealth.

Google Duplex appears to be a great breakthrough. But what happens if it's
sold and deployed? More wealth to an already wealthy Google. Low-income
earners who can barely make it as receptionists will be out of work. In the
restaurant example, owners cut costs and possibly earn more -- which is great
for these types of operators. But as someone who wants to schedule an
appointment, my life doesn't change.

Google also just released auto-complete and suggested replies for email. How
does something like this change my life?

Any drastic change from AI is abstract and hypothetical, based more in
science fiction than reality. In the 1940s, with advances in nuclear
research, it was thought that nuclear power would change the world. 70 years
later, over 80% of our energy still comes from fossil fuels. Not a great
example, but I'll hold this line about 'AI' until I'm convinced otherwise.

~~~
WorldMaker
I'm in a similar skepticism camp. I studied a lot of AI and Big Data in
graduate school, but haven't done much with it since graduation. My
impression from my studies, and from what I read today, is that the tools
haven't really changed since the last big AI boom in the '60s. There have
been incremental enhancements to many of the algorithms, but the algorithms
themselves haven't had major breakthroughs.

What has changed is we've accelerated GIGO to an extraordinary level that the
'60s could have only imagined/dreamed of.

With a firehose of inputs on one side and a firehose of outputs on the other
side, it's easy for the man-behind-the-curtain/model-in-the-machine to seem
quite capable/intelligent, but how are we QA/QCing that that is actually the
case?

Take everyone's growing frustrations with auto-correct systems using words
they'd never use themselves: are those mistakes in the model that will get
trained away eventually, or are they representative of the firehose of
garbage-in/garbage-out? Similarly, when was the last time YouTube's
"Auto-Play" feature was useful to you? I find I've turned it off on every
device I own because it is nonsensical, useless, and often full of literal
garbage.

Some of the early logic mathematicians, such as Boole and Bayes, names that
come up commonly in AI algorithms, thought they might find proof of God in
probability theory. I sometimes wonder whether probability theory may not
prove God, but could definitely prove that (soylent) Devils are real. What's
going on with machine learning in YouTube and in the ad spaces frightens me
in aggregate; the problem is still people, but we've massively accelerated
GIGO around people.

------
3pt14159
Kissinger is right.

The number of strategic threats due to AI is exploding: social media
manipulation online, voice-to-text from tiny microphones, hacked autonomous
devices, hyper-targeted predictions of unstable systems, autonomous devices
smuggling dangerous material (arms, fentanyl, fissile material), or even hard
AI in the far future.

It's even decaying democracy itself. I know plenty of good people that would
make great politicians but absolutely refuse to run because they understand
how much dirt OSINT would pull up on them and AI only makes this worse because
of advanced stylometry.

The problems with cyber are real. AI and cyber security in an increasingly
cyber-physical world are things we aren't prepared for. We can't even get
non-trivial numbers of humans to believe that global warming is _happening_,
let alone that it's anthropogenic. Since at least 2016 the CIA has listed
cyber as the number one threat to global security each year, despite the DPRK
gaining access to usable nuclear weapons and ICBMs.

That's the scale of the risk / hazard we face with AI and cyber security over
the next decade. It's worse than North Korea getting nuclear weapons.

------
gumby
Ironically, this reads like a very well-written student essay, despite being
written by a statesman at the dusk of his life.

Why? Well, as he says at the front of his essay, he'd never considered AI (a
field funded by ARPA (later DARPA) through the height of his power), and so
his musings (whoa, a car driver needs to consider novel situations! People
ask the Internet for stuff immediately relevant to their needs and
interests!) are, frankly, the first thoughts _anyone_ will have when they
first think of machines that think. These are the thoughts I had as a
teenager in the 1970s, and I don't claim to have had any special talent.

I always believed that Kissinger is very smart, and this essay doesn't change
that. But I always believed he was intellectually extremely lazy (rather than
being the steely-eyed Vulcan of Realpolitik who could see through the
bullshit), and this essay doesn't change that either.

------
samirillian
> immobilizes his or her opponent by more effectively controlling territory

I like how he even talks about go like an imperialist.

------
coldtea
Well, Kissinger's interventions have meant the end of many humans in history
as well...

------
hprotagonist
History ended in 1989. Surely Kissinger was listening.

[https://en.wikipedia.org/wiki/End_of_history](https://en.wikipedia.org/wiki/End_of_history)

------
joe_the_user
This is a good article.

Kissinger begins not at the purely hypothetical "singularity" of Ray Kurzweil
or the superintelligence of Nick Bostrom, but at the present, where the
unbridled flow of data on the Internet already threatens human society's
ability to coherently consider its future. Nearly every AI scare story (Musk,
Bostrom, etc.) talks about the threat of thinking-AI while ignoring the
increasing and immediate impact of "dumb-AI" _as an extension of_ the blind
data-driven logic of today.

------
starchild_3001
Rather: "Human society unprepared for the misuse of ML by other humans". Fear
other humans, not machines (yet).

------
NickLamp
Towards the end of the article Kissinger states

"Other countries have made AI a major national project. The United States has
not yet, as a nation, systematically explored its full scope, studied its
implications, or begun the process of ultimate learning. This should be given
a high national priority, above all, from the point of view of relating AI to
humanistic traditions."

Which seems to me a disappointing end to the article. An appeal for a national
effort to manage AI.

"Other countries have major AI projects" but what exactly should the US model
itself after?

"The United States has not yet systematically explored it's scope" but US
publishes the second most research papers on AI
([https://www.timeshighereducation.com/data-bites/which-
countr...](https://www.timeshighereducation.com/data-bites/which-countries-
and-universities-are-leading-ai-research)) which I think in general is a bad
metric but if you're looking at the output that a system produces, it's the
best metric you're going to get.

Looking at the private sector, the leading AI/Robotics ETF $BOTZ
([https://www.globalxfunds.com/funds/botz/](https://www.globalxfunds.com/funds/botz/))
is composed mostly of American companies. Similarly, look at the sheer number
of ML/AI startups in SV.

So I fail to see where the crisis is. If American universities are among the
leaders in AI research and American companies are among the leaders in the AI
economy, why is there such a tone of urgency in this article? Kissinger's
argument leads me to believe that he's advocating for a blanket "AI"
initiative but he doesn't have a clear idea of what he wants this initiative
to do. Without a clear direction for how he wants the US to be "relating AI to
humanistic traditions", whatever he's proposing here is just the marriage of
shallow musings about the consequences of AI and some kind of blind belief in
federal government initiative.

------
lostmsu
So, any new thought?

------
clarkmoody
Henry Kissinger: concerned humanist

I love dark irony first thing in the morning...

------
AndyMcConachie
Can't Henry Kissinger just die already?

He won the Nobel Peace Prize for ending the Vietnam War. Only in the '90s did
we learn that he and Nixon torpedoed the '68 Vietnam peace talks under
Johnson to make Johnson look bad before the election.

[https://www.smithsonianmag.com/smart-news/notes-indicate-nixon-interfered-1968-peace-talks-180961627/](https://www.smithsonianmag.com/smart-news/notes-indicate-nixon-interfered-1968-peace-talks-180961627/)

This is public information. We know that part of what Nixon was looking for
during the Watergate break-ins was evidence that Johnson had kept about this.
Yet publications like The Atlantic still allow the criminal Kissinger to
publish.

From: [https://www.commondreams.org/views/2014/08/12/george-will-confirms-nixons-vietnam-treason](https://www.commondreams.org/views/2014/08/12/george-will-confirms-nixons-vietnam-treason)

In The Price of Power (1983), Seymour Hersh revealed that Henry Kissinger,
then Johnson's adviser on the Vietnam peace talks, secretly alerted Nixon's
staff that a truce was imminent.

According to Hersh, Nixon “was able to get a series of messages to the Thieu
government [of South Vietnam] making it clear that a Nixon presidency would
have different views on peace negotiations.”

Johnson was livid. He even called the Republican Senate Minority Leader,
Everett Dirksen, to complain that “they oughtn’t be doing this. This is
treason.”

“I know,” was Dirksen’s feeble reply.

Johnson blasted Nixon about this on November 3rd, just prior to the election.
As Robert Parry of Consortiumnews.com has written: “when Johnson confronted
Nixon with evidence of the peace-talk sabotage, Nixon insisted on his
innocence but acknowledged that he knew what was at stake.”

Said Nixon: “My, I would never do anything to encourage….Saigon not to come to
the table….Good God, we’ve got to get them to Paris or you can’t have peace.”

But South Vietnamese President General Thieu, a notorious drug and gun
runner, did boycott Johnson's Paris peace talks. With the war still raging,
Nixon claimed a narrow victory over Humphrey. He then made Kissinger his own
national security adviser.

In the four years between the sabotage and what Kissinger termed “peace at
hand” just prior to the 1972 election, more than 20,000 US troops died in
Vietnam. More than 100,000 were wounded. More than a million Vietnamese were
killed.

~~~
AndrewKemendo
This entire post is ad hominem and does not make an argument against the
content of the linked article.

~~~
RobertoG
The post may not be related to the subject of the article, but it is related
to the author of the article.

It's clearly not ad hominem, because the comment is not rejecting the claims
of the article, only expressing an opinion about the author.

On the other hand, I have to agree with the criticism about the tone that the
other comment expresses. It never improves the debate to wish somebody dead.

~~~
AndrewKemendo
In which case, it's an off topic comment that is not relevant to the content
of the article.

~~~
RobertoG
Apparently, but is that the case?

After all, both the article and the comment are talking about the dangers of
power unrestricted by ethics.

