A German-born professor with a thick accent becomes National Security Advisor and then Secretary of State overnight (the third most powerful position in government)? How does that happen?
The guy is a war criminal but the media is in love with him.
Even if it is possible to 'tether' AIs, won't we just remove those limits? Look what we do to fake persons today: corporations. They are like AIs in that small calculations made throughout the entity guide the decisions of the whole.
We cyclically add boundaries to what they're allowed to do; then the next administration removes those boundaries in the interest of unbridled growth.
We'll do the same with AIs. They'll calculate a 'best outcome' of letting millions die in an Indonesian drought, because the average standard of living will increase for everyone else. In one administration we'll want to temper such decisions with humanity; in the next we'll just want more beer.
What grounds humans in "human ethics"? Is it thoughtful, rational decision-making or the whims of the powerful at the time?
And which human ethics are you grounding in? Aristotelian, Cynic, Stoic, Theological?
There seems to be a common idea that humans all share some similar ethical framework or thread. I don't think that holds. In fact, I don't know if there is any single ethic that you could argue every single human holds similarly. Maybe you could argue for the will to survive and reproduce, but even that is highly variable.
The idea that we should or could build some kind of ethical homogeneity into our engineered intelligent systems is absolutely silly.
In any case, moral skepticism and moral relativism are not only worthless, but incoherent and untenable and nothing anyone of any philosophical sophistication holds to. It is the quintessential freshman philosophy student's position, a position born out of ignorance and teenage rebelliousness rather than the careful exercise of reason.
That doesn't mean that you can derive a formal system of ethics that is "true", but there are clearly some trends. Trends that an AI, which has not evolved as we have, need not share. I don't think this is a sterile debate.
As are honeybees. I fail to see how this (or any) feature is distinguishing when describing how one should build an ethical framework into an engineered system. If, however, there were a more nuanced, pro-social, universally attributable aspect that could be described and modeled, then I think that would be a helpful addition.
I don't think this is a sterile debate.
It's anything but sterile. However what is sterile is the idea that there exists a desirable ethical system for AGI that is based on human ethics.
My point is that human ethics is grounded in human values, and those are grounded in our evolutionary history. As we share an evolutionary history, it would not be strange that there are a lot of values we share. Maybe they are difficult to see because they appear to us as obviously true, but this is not going to be the case with machines.
Presumably, we want machines that care about the same things that we do.
I argue they are difficult to see because they don't exist. For example, "Don't kill" would be the obvious value we share — until it's not a value we share. Maybe it's a value we ought to share, but as the dozens of examples in philosophy show (the trolley problem, etc.), it's not something that we can hard-code. OK, so maybe it's "it's OK to kill in the following circumstances...", in which case a brief history of humanity would show massive changes in what those circumstances are over the last 10,000 years, so that's a bad candidate for a "value" set as well.
This is the going argument and has been since the concept of "friendliness" in AI came about. Trouble is, we can't seem to pin down the things we as a human race collectively "care about" to the granularity that would be required to write machine instruction around it. I'm arguing there aren't any collective values that everyone would agree upon which would inform such a discussion.
What if another human is attacking you with intent to kill and you have no other options? How about a case like the trolley problem, where you are forced into a trade-off of scale? These are valid problems when designing current engineered systems, not just AI systems.
It's possible to philosophically "bite the bullet" and say "No, it's never justified or moral," but that doesn't pass the test of practicality.
So something as seemingly obvious as "Don't kill" doesn't have firm boundaries from which we can create a universal set of mathematically implementable, and acceptable, rules.
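The exception-creep problem can be made concrete with a toy sketch. This is a purely hypothetical illustration (the function and the cases are invented for this comment, not drawn from any real system): every philosophical counterexample forces another clause, and people disagree about every clause.

```python
# Hypothetical sketch of hard-coding "Don't kill" as machine rules.
# Each branch is an exception someone has argued for; the list never
# stops growing, and the growth itself is the argument.

def killing_permitted(context: dict) -> bool:
    if context.get("self_defense") and context.get("no_other_option"):
        return True  # attacker with intent to kill, no escape
    if context.get("trolley_problem") and context.get("saves_more_lives"):
        return True  # forced trade-off of scale
    if context.get("lawful_war"):
        return True  # a circumstance whose boundaries shifted for 10,000 years
    # ...every new thought experiment demands another clause, and there is
    # no consensus on any of the clauses above.
    return False

print(killing_permitted({"self_defense": True, "no_other_option": True}))  # True
print(killing_permitted({}))  # False
```

The point isn't that the code is wrong in any one branch; it's that no finite, agreed-upon set of branches exists to write down.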
What a nice piece of flawed logic: 'I am not educated in technology but I'm educated in politics and philosophy, therefore anyone educated in technology is not educated in politics nor philosophy'.
"In his opening address at the Munich Security Conference in January 2014, the Estonian president Toomas Hendrik Ilves said that the current problems related to security and freedom in cyberspace are the culmination of absence of dialogue between "the two cultures": "Today, bereft of understanding of fundamental issues and writings in the development of liberal democracy, computer geeks devise ever better ways to track people... simply because they can and it's cool. Humanists on the other hand do not understand the underlying technology and are convinced, for example, that tracking meta-data means the government reads their emails.""
(A detective can do so as well. But that is much more expensive and doesn't scale.)
The Google Duplex appears to be a great breakthrough. But what happens if it's sold and deployed? More wealth to an already wealthy Google. Low income earners who can barely make it as a receptionist will be out of work. In the restaurant example, owners cut costs and possibly earn more -- which is great for these types of operators. But as someone who wants to schedule an appointment, my life doesn't change.
Google also just released auto-complete and automatic-response features for email. How does something like this change my life?
Any drastic change in AI is abstract and hypothetical, based more in science fiction than reality. In the 1940s, with advances in nuclear research, it was thought that nuclear power for energy would change the world. 70 years later, we're still using over 80% fossil fuels. Not a great example, but I'll hold this line about 'AI' until I'm convinced otherwise.
What has changed is we've accelerated GIGO to an extraordinary level that the '60s could have only imagined/dreamed of.
With a firehose of inputs on one side and a firehose of outputs on the other side, it's easy for the man-behind-the-curtain/model-in-the-machine to seem quite capable/intelligent, but how are we QA/QCing that that is actually the case?
Take everyone's growing frustration with auto-correct systems using words they'd never themselves use: are those mistakes in the model that will eventually get trained away, or are they representative of the firehose of garbage-in/garbage-out? Similarly, when was the last time YouTube's "Auto-Play" feature was useful to you? I find I've turned it off on every device I own because it is nonsensical, useless, and often full of literal garbage.
Some of the early logic mathematicians such as Boole and Bayes, names that come up commonly in AI algorithms, thought they might find proof of God in probability theory, and I sometimes wonder if we may find that probability theory may not prove God, but could definitely prove that (soylent) Devils are real. What's going on with machine learning in YouTube and in the ad spaces frightens me in aggregate, and the problem is still people, but we've massively accelerated GIGO around people.
The number of strategic threats due to AI is exploding: social media manipulation online, voice-to-text from tiny microphones, hacked autonomous devices, hyper-targeted predictions of unstable systems, autonomous devices smuggling dangerous material (arms, fentanyl, fissile material), or even hard AI in the far future.
It's even decaying democracy itself. I know plenty of good people that would make great politicians but absolutely refuse to run because they understand how much dirt OSINT would pull up on them and AI only makes this worse because of advanced stylometry.
The problems with cyber are real. AI and cyber security in an increasingly cyber-physical world are something we aren't prepared for. We can't even get non-trivial numbers of people to believe that global warming is happening, let alone that it's anthropogenic. Since at least 2016 the CIA has listed cyber as the number one threat to global security each year, despite the DPRK gaining access to usable nuclear weapons and ICBMs.
That's the scale of the risk / hazard we face with AI and cyber security over the next decade. It's worse than North Korea getting nuclear weapons.
Why? Well, as he says at the front of his essay, he'd never considered AI (a field funded by ARPA, later DARPA, through the height of his power), and so his musings (whoa, a car driver needs to consider novel situations! People ask the Internet for stuff immediately relevant to their needs and interests!) are, frankly, the first thoughts anyone will have when they first think of machines that think. The thoughts I had as a teenager in the 1970s, and I don't claim to have had any special talent.
I always believed that Kissinger is very smart, and this essay doesn't change that. But I always believed he was intellectually extremely lazy (rather than being the steely-eyed Vulcan of Realpolitik who could see through the bullshit), and this essay also doesn't change that.
I like how he even talks about Go like an imperialist.
Kissinger begins not at the purely hypothetical "singularity" of Ray Kurzweil or the superintelligence of Nick Bostrom, but at the present, where the unbridled flow of data on the Internet already threatens human society's ability to coherently consider its future. Nearly every AI scare story (Musk, Bostrom, etc.) talks about the threat of thinking-AI while ignoring the increasing and immediate impact of "dumb-AI" as an extension of the blind data-driven logic of today.
"Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions."
Which seems to me a disappointing end to the article. An appeal for a national effort to manage AI.
"Other countries have major AI projects" but what exactly should the US model itself after?
"The United States has not yet systematically explored its scope," but the US publishes the second-most research papers on AI (https://www.timeshighereducation.com/data-bites/which-countr...), which I think is in general a bad metric, but if you're looking at the output a system produces, it's the best metric you're going to get.
Looking at the private sector, the leading AI/Robotics ETF $BOTZ (https://www.globalxfunds.com/funds/botz/) is composed mostly of American companies. Similarly, look at the sheer number of ML/AI startups in SV.
So I fail to see where the crisis is. If American universities are among the leaders in AI research and American companies are among the leaders in the AI economy, why is there such a tone of urgency in this article? Kissinger's argument leads me to believe that he's advocating for a blanket "AI" initiative but he doesn't have a clear idea of what he wants this initiative to do. Without a clear direction for how he wants the US to be "relating AI to humanistic traditions", whatever he's proposing here is just the marriage of shallow musings about the consequences of AI and some kind of blind belief in federal government initiative.
I love dark irony first thing in the morning...
He won the Nobel Peace Prize for ending the Vietnam War. Only in the '90s did we learn that he and Nixon torpedoed the '68 Vietnam peace talks under Johnson to make Johnson look bad before the election.
This is public information. We know that part of what Nixon was looking for during the Watergate break-ins was evidence that Johnson kept about this. Yet publications like The Atlantic still allow this criminal Kissinger to publish.
In The Price of Power (1983), Seymour Hersh revealed that Henry Kissinger, then Johnson's adviser on the Vietnam peace talks, secretly alerted Nixon's staff that a truce was imminent.
According to Hersh, Nixon “was able to get a series of messages to the Thieu government [of South Vietnam] making it clear that a Nixon presidency would have different views on peace negotiations.”
Johnson was livid. He even called the Republican Senate Minority Leader, Everett Dirksen, to complain that “they oughtn’t be doing this. This is treason.”
“I know,” was Dirksen’s feeble reply.
Johnson blasted Nixon about this on November 3rd, just prior to the election. As Robert Parry of Consortiumnews.com has written: “when Johnson confronted Nixon with evidence of the peace-talk sabotage, Nixon insisted on his innocence but acknowledged that he knew what was at stake.”
Said Nixon: “My, I would never do anything to encourage….Saigon not to come to the table….Good God, we’ve got to get them to Paris or you can’t have peace.”
But South Vietnamese President General Thieu, a notorious drug and gun runner, did boycott Johnson's Paris peace talks. With the war still raging, Nixon claimed a narrow victory over Humphrey. He then made Kissinger his own national security adviser.
In the four years between the sabotage and what Kissinger termed “peace at hand” just prior to the 1972 election, more than 20,000 US troops died in Vietnam. More than 100,000 were wounded. More than a million Vietnamese were killed.
It's clearly not ad hominem, because the comment is not rejecting the claims of the article, only expressing an opinion about the author.
On the other hand, I have to agree with the criticism of the tone that the other comment expresses. It never improves the debate to wish somebody dead.
After all, both the article and the comment are talking about the dangers of power unrestricted by ethics.