Human society is unprepared for the rise of artificial intelligence (theatlantic.com)
46 points by ForHackernews 7 months ago | 44 comments



Even in retirement, I’d wager Henry Kissinger is far more likely to mean the end of human history than AI is.


Hey, he's committed himself to improving the world by sitting on the boards of innovative companies like Theranos.


Retirement? The guy visited China and met with China's leader to calm relations after Trump got elected.

http://www.scmp.com/news/china/diplomacy-defence/article/205...

A German-born professor with a thick accent rises to National Security Advisor and then Secretary of State overnight (the third most powerful position in government)? How does that happen?

The guy is a war criminal but the media is in love with him.


The idea about 'untethered AIs' is a good one - what will ground an AI in human ethics? Currently they are designed to win. Will it even be possible to create an AI that 'cares' about the outcomes it's calculating?

Even if it is possible to 'tether' AIs, won't we just remove those limits? Look what we do to fake persons today - corporations. They are like AIs in that small calculations made throughout the entity guide the decisions as a whole.

We cyclically add boundaries to what they're allowed to do, then the next administration removes those boundaries in the interest of unbridled growth.

We'll do the same with AIs. They'll calculate a 'best outcome' of letting millions die in an Indonesian drought, because the average standard of living will increase for everyone else. In one administration we'll want to temper such decisions with humanity; in the next we'll just want more beer.


what will ground an AI in human ethics?

What grounds humans in "human ethics"? Is it thoughtful, rational decision-making or the whims of the powerful at the time?

And which human ethics are you grounding in? Aristotelian, Cynic, Stoic, Theological?

There seems to be a common idea that humans all share some ethical framework or thread that is similar. I don't think that holds. In fact, I don't know if there is any single ethic that you could argue every single human holds similarly. Maybe you could argue the will to survive and reproduce, but even that is highly variable.

The idea that we should or could build some kind of ethical homogeneity into our engineered intelligent systems is absolutely silly.


It seems you grossly misunderstand ethics by portraying it as some kind of arbitrary system (or "framework," per your word choice, a word I've come to despise) imposed on people, or some kind of code voluntarily adopted but ultimately without ground. People may disagree about what is good, or about what the correct moral principles are, but that does not mean there isn't a proper object of study, one that is objective and universal. Cultures can be wrong. People can be wrong. People can be ignorant. Their moral intuitions and their ethical judgments can be deformed and at odds with the truth.

In any case, moral skepticism and moral relativism are not only worthless, but incoherent and untenable and nothing anyone of any philosophical sophistication holds to. It is the quintessential freshman philosophy student's position, a position born out of ignorance and teenage rebelliousness rather than the careful exercise of reason.


Pray tell, what "objective and universal" object are you referring to?


There are some characteristics, shaped by evolution, that are typically human. For instance, humans are social.

That doesn't mean that you can derive a formal system of ethics that is "true", but there are clearly some trends. Trends that an AI, which has not evolved as we have, doesn't need to share. I don't think this is a sterile debate.


For instance, humans are social

As are honeybees. I fail to see how this (or any) feature is distinguishing when describing how one should build an ethical framework into an engineered system. If, however, there were a more nuanced, universally attributable pro-social aspect that could be described and modeled, then I think that would be a helpful addition.

I don't think this is a sterile debate.

It's anything but sterile. However what is sterile is the idea that there exists a desirable ethical system for AGI that is based on human ethics.


>>"I fail to see how this (or any) feature is distinguishing when describing how one should build an ethical framework into an engineered system. "

My point is that human ethics is grounded in human values, and those are grounded in our evolutionary history. As we share an evolutionary history, it would not be strange that there are a lot of values that we share. Maybe they are difficult to see because they appear to us as obviously true, but this is not going to be the case with machines.

Presumably, we want machines that care about the same things we do.


Maybe they are difficult to see because they appear to us as obviously true

I argue they are difficult to see because they don't exist. For example, "Don't kill" would be the obvious value that we share - until it's not a value we share. Maybe it's a value we ought to share, but as the dozens of examples in philosophy show (trolley problem, etc.), in fact it's not something that we can hard-code. OK, so maybe it's "it's OK to kill in the following circumstances...", in which case a brief history of humanity would show massive changes in what those circumstances are over the last 10,000 years - so that's a bad candidate for a "value" set as well.

Presumably, we want machines that care about the same things we do.

This is the going argument, and has been since the concept of "friendliness" in AI came about. Trouble is, we can't seem to pin down the things we as a human race collectively "care about" to the granularity that would be required to write machine instructions around them. I'm arguing there aren't any collective values that everyone would agree upon which would inform such a discussion.


Thou Shalt Not Kill is a pretty good start


This is a commonly used example, to which the common question is "Is there absolutely no case in which killing is a justified and moral position?" Even then you must specify further: "Don't kill", but only humans, or all things?

What if another human is attacking you with intent to kill and you have no other options? How about a case like the trolley problem [1], where you are forced into a challenge of scale? These are all valid problems with designing current engineering systems, not just AI systems.

It's possible to philosophically "bite the bullet" [2] and say "No, it's never justified or moral", but that doesn't pass the test of practicality.

So something as seemingly obvious as "Don't kill" doesn't have firm enough boundaries for us to derive a universal set of mathematically implementable, and broadly acceptable, rules from it (a toy sketch below makes this concrete).

[1]https://en.wikipedia.org/wiki/Trolley_problem

[2]https://en.wikipedia.org/wiki/Bite_the_bullet#In_philosophy
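
To make that concrete, here is a minimal toy sketch in Python (hypothetical names and fields, nobody's actual proposal) of what "Don't kill" looks like once you insist on a decidable rule. Every exception has to become a crisply computable predicate, and the predicates are exactly where the disagreement lives.

    from dataclasses import dataclass


    @dataclass
    class Situation:
        # All fields are hypothetical simplifications someone upstream must supply.
        under_lethal_attack: bool   # is the actor being attacked with intent to kill?
        no_other_option: bool       # could it escape or de-escalate instead?
        deaths_if_act: int          # expected deaths if the machine acts
        deaths_if_abstain: int      # expected deaths if it does nothing (the trolley case)


    def killing_permitted(s: Situation) -> bool:
        # Exception 1: self-defence. "Intent to kill" and "no other option"
        # have been flattened into two booleans whose boundaries are contested.
        if s.under_lethal_attack and s.no_other_option:
            return True
        # Exception 2: the trolley case, encoded as raw body-count utilitarianism,
        # which many people reject as a general principle.
        if s.deaths_if_act < s.deaths_if_abstain:
            return True
        # Default: the "obvious" shared value.
        return False

The point isn't the code; it's that each branch either smuggles a contested judgment into an input someone else must compute, or commits to a trade-off (pure body counts) that plenty of people reject, and history shows those commitments shifting.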


Is there no spectrum between absolutes? Anyway, the AI will have no concept of the notion unless it's part of the calculation. And utilitarian notions are notoriously repugnant.


Yeah, right. The problem is that if you think militaries don’t have plans for AI you’re nuts. Blanket prohibitions against killing and automated killing machines are incompatible. The civilian sector can be as moral as it wants, yet ironically it will be the AI most able to kill that will be least restrained from killing.


I know that the very first time I watched the MIT robot cheetah, I instantly imagined the version that is surely coming: that robot with a gun mounted on it, hunting down a human[0].

[0] https://www.youtube.com/watch?v=_luhn7TLfWU


At the article's end: '[...] AI developers, as inexperienced in politics and philosophy as I am in technology [...]'.

What a nice piece of flawed logic: 'I am not educated in technology but I'm educated in politics and philosophy, therefore anyone educated in technology is not educated in politics or philosophy'.


He didn't say all developers working on AI are inexperienced in politics. He was simply restating his thesis, which was an appeal to those who hadn't considered what he was writing about to be skeptical of the consequences of AI.


You seem to be under the impression that your own sentence equates to the author's. It does not.


Why do you think there's an implied causal relationship? In my eyes, this statement is an analogy between two separate observations. The author is probably aware that correlation does not imply causation.


I don't see how you could read that from the sentence. There's no logical inference in that fragment, just a claim that people in technology are as inexperienced in politics and philosophy as he is in technology. On the whole, that is an accurate observation.


From the Wikipedia page https://en.wikipedia.org/wiki/The_Two_Cultures

"In his opening address at the Munich Security Conference in January 2014, the Estonian president Toomas Hendrik Ilves said that the current problems related to security and freedom in cyberspace are the culmination of absence of dialogue between "the two cultures": "Today, bereft of understanding of fundamental issues and writings in the development of liberal democracy, computer geeks devise ever better ways to track people... simply because they can and it's cool. Humanists on the other hand do not understand the underlying technology and are convinced, for example, that tracking meta-data means the government reads their emails."[12]"


They're perfectly fine with the last point. While the govt is not reading emails, they can infer enough just from the metadata that reading the actual contents is irrelevant.

(A detective can do so as well. But that is much more expensive and doesn't scale.)


I still haven't found a good talk or essay about how exactly 'AI' will change anything. The only obvious difference is shifts in wealth.

Google Duplex appears to be a great breakthrough. But what happens if it's sold and deployed? More wealth to an already wealthy Google. Low-income earners who can barely make it as receptionists will be out of work. In the restaurant example, owners cut costs and possibly earn more -- which is great for these types of operators. But as someone who wants to schedule an appointment, my life doesn't change.

Google also just released auto-complete and automatic responses for email. How does something like this change my life?

Any drastic change from AI is abstract and hypothetical, based more in science fiction than reality. In the 1940s, with advances in nuclear research, it was thought that nuclear power for energy would change the world. 70 years later, we're still getting over 80% of our energy from fossil fuels. Not a great example, but I'll hold this line about 'AI' until I'm convinced otherwise.


I'm in a similar skepticism camp. I studied a lot of AI and Big Data in graduate school, but haven't done much with it since graduation. My impression from my studies and what I read of things today is that the tools haven't really changed since the last big AI boom in the '60s. There have been incremental enhancements to many of the algorithms, but the algorithms themselves haven't had major breakthroughs.

What has changed is that we've accelerated GIGO (garbage in, garbage out) to an extraordinary level that the '60s could only have imagined or dreamed of.

With a firehose of inputs on one side and a firehose of outputs on the other side, it's easy for the man-behind-the-curtain/model-in-the-machine to seem quite capable/intelligent, but how are we QA/QCing that that is actually the case?

Take everyone's growing frustrations with auto-correct systems using words they'd never themselves use: are those mistakes in the model that will eventually get trained away, or are they representative of the firehose of garbage-in/garbage-out? Similarly, when was the last time that YouTube's "Auto-Play" feature was useful to you? I find I've turned it off on every device I own because it is nonsensical and useless and often full of literal garbage.

Some of the early logic mathematicians such as Boole and Bayes, names that come up commonly in AI algorithms, thought they might find proof of God in probability theory, and I sometimes wonder if we may find that probability theory may not prove God, but could definitely prove that (soylent) Devils are real. What's going on with machine learning in YouTube and in the ad spaces frightens me in aggregate, and the problem is still people, but we've massively accelerated GIGO around people.


Leaving out AI, Robin Hanson has an entire book on what happens when we "only" start to emulate existing human thought in machines, what he calls "ems". Given only that, no new algorithms, just an increase in speed, he goes into some detail predicting economic effects.[0][1]

[0] https://www.youtube.com/watch?v=XP-p-O200jo

[1] https://www.amazon.com/Age-Em-Work-Robots-Earth/dp/019875462...


Kissinger is right.

The number of strategic threats due to AI is exploding: social-media manipulation online, voice-to-text from tiny microphones, hacked autonomous devices, hyper-targeted predictions of unstable systems, autonomous devices smuggling dangerous material (arms, fentanyl, fissile material), or even hard AI in the far future.

It's even eroding democracy itself. I know plenty of good people who would make great politicians but absolutely refuse to run because they understand how much dirt OSINT would pull up on them, and AI only makes this worse because of advanced stylometry.

The problems with cyber are real. AI and cyber security in an increasingly cyber-physical world are something we aren't prepared for. We can't even get non-trivial numbers of humans to believe that global warming is happening, let alone that it's anthropogenic. Since at least 2016 the CIA has listed cyber as the number one threat to global security each year, despite the DPRK getting access to usable nuclear weapons and ICBMs.

That's the scale of the risk / hazard we face with AI and cyber security over the next decade. It's worse than North Korea getting nuclear weapons.


Ironically, this reads like a very well-written student essay, despite being written by a statesman at the dusk of his life.

Why? Well, as he says at the front of his essay, he'd never considered AI (a field funded by ARPA (later DARPA) through the height of his power) and so his musings (whoah, a car driver needs to consider novel situations! People ask the Internet for stuff immediately relevant to their needs and interests!) are, frankly, the first thoughts anyone will have when they first think of machines that think. They're the thoughts I had as a teenager in the 1970s, and I don't claim to have had any special talent.

I always believed that Kissinger is very smart, and this essay doesn't change that. But I always believed he was intellectually extremely lazy (rather than being the steely-eyed Vulcan of Realpolitik who could see through the bullshit), and this essay also doesn't change that.


> immobilizes his or her opponent by more effectively controlling territory

I like how he even talks about go like an imperialist.


Well, Kissinger's interventions have meant the end of many humans in history as well...


History ended in 1989. Surely Kissinger was listening.

https://en.wikipedia.org/wiki/End_of_history


This is a good article.

Kissinger begins not at the purely hypothetical "singularity" of Ray Kurzweil or the superintelligence of Nick Bostrom, but at the present, where the unbridled flow of data on the Internet already threatens human society's ability to coherently consider its future. Nearly every AI scare story (Musk, Bostrom, etc.) talks about the threat of thinking-AI while ignoring the increasing and immediate impact of "dumb-AI" as an extension of the blind data-driven logic of today.


Rather: "Human society unprepared for the misuse of ML by other humans". Fear other humans, not machines (yet).


Towards the end of the article, Kissinger states:

"Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions."

Which seems to me a disappointing end to the article: an appeal for a national effort to manage AI.

"Other countries have major AI projects" but what exactly should the US model itself after?

"The United States has not yet systematically explored it's scope" but US publishes the second most research papers on AI (https://www.timeshighereducation.com/data-bites/which-countr...) which I think in general is a bad metric but if you're looking at the output that a system produces, it's the best metric you're going to get.

Looking at the private sector, the leading AI/robotics ETF, $BOTZ (https://www.globalxfunds.com/funds/botz/), is made up mostly of American companies. Similarly, look at the sheer number of ML/AI startups in SV.

So I fail to see where the crisis is. If American universities are among the leaders in AI research and American companies are among the leaders in the AI economy, why is there such a tone of urgency in this article? Kissinger's argument leads me to believe that he's advocating for a blanket "AI" initiative, but he doesn't have a clear idea of what he wants this initiative to do. Without a clear direction for how he wants the US to be "relating AI to humanistic traditions", whatever he's proposing here is just the marriage of shallow musings about the consequences of AI with some kind of blind belief in federal government initiative.


So, any new thoughts?


Henry Kissinger: concerned humanist

I love dark irony first thing in the morning...


Can't Henry Kissinger just die already?

He won the Nobel Peace Prize for ending the Vietnam War. Only in the '90s did we learn that he and Nixon torpedoed the '68 Vietnam peace talks under Johnson to make Johnson look bad before the election.

<https://www.smithsonianmag.com/smart-news/notes-indicate-nix...

This is public information. We know that part of what Nixon was looking for during the Watergate break-ins was the evidence that Johnson kept about this. Yet publications like The Atlantic still allow this criminal Kissinger to publish.

From: <https://www.commondreams.org/views/2014/08/12/george-will-co...

In The Price of Power (1983), Seymour Hersh revealed that Henry Kissinger, then Johnson's adviser on the Vietnam peace talks, secretly alerted Nixon's staff that a truce was imminent.

According to Hersh, Nixon “was able to get a series of messages to the Thieu government [of South Vietnam] making it clear that a Nixon presidency would have different views on peace negotiations.”

Johnson was livid. He even called the Republican Senate Minority Leader, Everett Dirksen, to complain that “they oughtn’t be doing this. This is treason.”

“I know,” was Dirksen’s feeble reply.

Johnson blasted Nixon about this on November 3rd, just prior to the election. As Robert Parry of Consortiumnews.com has written: “when Johnson confronted Nixon with evidence of the peace-talk sabotage, Nixon insisted on his innocence but acknowledged that he knew what was at stake.”

Said Nixon: “My, I would never do anything to encourage….Saigon not to come to the table….Good God, we’ve got to get them to Paris or you can’t have peace.”

But South Vietnamese President General Thieu—a notorious drug and gun runner—did boycott Johnson's Paris peace talks. With the war still raging, Nixon claimed a narrow victory over Humphrey. He then made Kissinger his own national security adviser.

In the four years between the sabotage and what Kissinger termed “peace at hand” just prior to the 1972 election, more than 20,000 US troops died in Vietnam. More than 100,000 were wounded. More than a million Vietnamese were killed.


That's only one of his exploits. That this guy talks with a straight face about "enlightenment" and "ethics" is "funny".


Off-topic rant aside, "Can't Henry Kissinger just die already?" is a despicable thing to say about anyone. If you want to get a point across, don't rant it, and don't open with something vile. Not liking someone, or a conviction that they're a terrible person, is no excuse.


This entire post is ad-hominem and does not make an argument against the content of the linked article.


The post may not be related to the subject of the article, but it is related to the author of the article.

It's clearly not ad-hominem because the comment is not rejecting the claims of the article, only expressing his opinion about the author.

On the other hand, I have to agree with the criticism about the tone that the other comment expresses. It never improves the debate to wish somebody dead.


In which case, it's an off-topic comment that is not relevant to the content of the article.


Apparently, but is that the case?

After all, both the article and the comment are talking about the dangers of power unrestricted by ethics.


It could be viewed as providing useful historical context. I don't have an opinion on the matter, as I don't really know much about Kissinger, but it seems like a possibility at least.



