> Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be.
I think Nick Bostrom had the perfect reply to that in Superintelligence: Paths, Dangers, Strategies:
> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
It would be extremely strange if we were near the smartest possible minds. Just look at the evidence: Our fastest neurons send signals at 0.0000004c. Our working memory is smaller than a chimp's.[1] We need pencil and paper to do basic arithmetic. These are not attributes of the pinnacle of possible intelligences.
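(As a rough sanity check on that 0.0000004c figure: assuming roughly 120 m/s for the fastest myelinated axons, which is a commonly cited ballpark rather than an exact number, the arithmetic does work out.)

```python
# Back-of-envelope check of the "0.0000004c" claim.
# Assumes ~120 m/s for the fastest myelinated axons (a ballpark figure, not exact).
SPEED_OF_LIGHT = 3.0e8   # metres per second
FASTEST_NEURON = 120.0   # metres per second, approximate peak conduction velocity

print(FASTEST_NEURON / SPEED_OF_LIGHT)  # -> 4e-07, i.e. roughly 0.0000004c
```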
Even if you think it's likely that we are near the smartest possible minds, consider the consequences of being wrong: The AI becomes much smarter than us and potentially destroys everyone and everything we care about. Unless you are supremely confident in humanity's intelligence, you should be concerned about AI risk.
There are savants who can do amazing feats of mental arithmetic, yet have severe mental disabilities in other areas. Perhaps there are some fundamental limits and trade-offs involved? We don't know yet whether computers will be able to break through those limits.
The humans who are in charge of our current society by virtue of having the most wealth, political power, or popularity aren't necessarily the smartest or quickest thinkers, at least not in the way that most AGI researchers seem to be targeting. So even if someone manages to build a real AGI, there's no reason to expect it will end up ruling us.
>There are savants who can do amazing feats of mental arithmetic, yet have severe mental disabilities in other areas.
There are also geniuses who can do amazing feats of mental arithmetic and have no severe mental disabilities in any other areas. There are people who have perfect memories (and usually, they live totally normal lives but for the inability to forget a single event). There are people who are far better than average at recognizing faces, pattern recognition, and other tasks that are non-conscious.
The notion that there are "fundamental limits" that are for some reason near the typical human is the cognitive bias of the just-world fallacy. The world is not just. Bad things happen to good people. Good things happen to bad people. If there is a fundamental limit to how smart a mind can get, it's very, very far above the typical human, because there are exceptional humans that are that far away and there's no reason to think a computer couldn't beat them either. Deep Blue beat Kasparov.
>The humans who are in charge of our current society by virtue of having the most wealth, political power, or popularity aren't necessarily the smartest or quickest thinkers, at least not in the way that most AGI researchers seem to be targeting.
I'm not at all familiar with AGI research, but while there are people who start out ahead, there's no reason to think you can't actually play the game and win it. Winning at the games of acquiring wealth, political power, or popularity is related to intelligence, insofar as intelligence is defined (as it is, at least in psychology) as the ability to accomplish general goals.
Walking into a meeting of the Deutsche Arbeiterpartei, joining as the 55th member, and later seizing control of the state requires intelligence. Landing in a dingy yacht with 81 men, bleeding a regime to death, and ruling it until your death requires intelligence. Buying the rights to the Quick and Dirty Operating System, licensing it to IBM, and becoming the de facto standard OS for all consumer computing requires intelligence. Presenting yourself as a folksy Texan when you're a private-school elite from Connecticut and convincing the electorate that you'd have a beer with them requires intelligence. All of these outcomes are goals, and accomplishing them demonstrates an ability to actually put the rubber to the road.
You're confusing the technical term of intelligence, which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence" which means something along the lines of "ability to impress people." Intelligence is not always impressive, and humans have a fantastic ability to write off the accomplishments of others when they want to make the world appear more just. I mean, nobody feels good about the fact that we're one Napoleon away from a totally different world system that may or may not suit our interests. The belief that the world is somehow fundamentally different to the extent that the next Napoleon-class intelligence who isn't content managing a hedge fund and just increasing an int in a bank database can't actually redraw the world map, is just an illusion we tell ourselves to make the world we live in more fair, the stories we live out more meaningful, and the events of our lives a little more important.
AI could change things very, very drastically. It's right to be afraid.
> There are also geniuses who can do amazing feats of mental arithmetic and have no severe mental disabilities in any other areas.
And do they seem to hold any sort of significant power? It seems that intelligence is not a very good means for achieving power. Charm seems much more effective, and therefore dangerous. I'd be afraid of super-human charm much more than super-human intelligence.
Mental disabilities aside, very high IQ seems to be correlated with relatively low charm and a low ability to solve the particular problems (how to get people to do what you want) that are far more dangerous than the kind that problem-solving intelligence (as we commonly define it) is capable of solving. "Super" intelligent people are terrible problem-solvers when the problems involve other humans.
Ironically, the fact that people like me view the AI-scare as a religious apocalypse that is as threatening as any other religious apocalypse implies one of two things: 1) that the people promoting the reality of this apocalypse are not as intelligent as they believe themselves to be (a real possibility given their limited understanding of both intelligence and our real achievements in the field of AI) and/or that 2) intelligent people are terrible at convincing others, and so don't pose much of a risk.
Either possibility shows that super-human AI is a non-issue, certainly not at this point in time. As someone said (I don't remember who), we might as well worry about over-population on Mars.
What's worse is that machine learning poses other, much more serious and much more imminent threats than super-human intelligence, such as learned biases, which are just one example of conservative feedback-loops (the more we rely on data and shape our actions accordingly, the more the present dynamics reflected in the data ensure that they don't change).
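To sketch what that feedback loop looks like (a toy example of my own, with invented numbers, not a claim about any particular system): a decision-maker that always acts on its historical estimates never collects the data that would correct them, so whatever the past data happened to say gets locked in.

```python
import random

# Toy "conservative feedback loop": the system always picks whichever option its
# historical data favours, so it never observes outcomes for the neglected option
# and its mistaken estimate is never corrected. All numbers are invented.
true_rate = {"A": 0.5, "B": 0.7}   # in reality, option B is better
estimate  = {"A": 0.5, "B": 0.1}   # but the historical data undersold B
counts    = {"A": 1, "B": 1}

random.seed(0)
for _ in range(10_000):
    choice = max(estimate, key=estimate.get)        # act purely on existing data
    outcome = random.random() < true_rate[choice]   # observe the result
    counts[choice] += 1
    estimate[choice] += (outcome - estimate[choice]) / counts[choice]

print(estimate)  # B stays stuck near 0.1: the status quo reinforces itself
```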
>It seems that intelligence is not a very good means for achieving power. Charm seems much more effective, and therefore dangerous. I'd be afraid of super-human charm much more than super-human intelligence.
See these paragraphs in the post to which you replied:
>Walking into a meeting of the Deutsche Arbeiterpartei, joining as the 55th member, and later seizing control of the state requires intelligence. Landing in a dingy yacht with 81 men, bleeding a regime to death, and ruling it until your death requires intelligence. Buying the rights to the Quick and Dirty Operating System, licensing it to IBM, and becoming the de facto standard OS for all consumer computing requires intelligence. Presenting yourself as a folksy Texan when you're a private-school elite from Connecticut and convincing the electorate that you'd have a beer with them requires intelligence. All of these outcomes are goals, and accomplishing them demonstrates an ability to actually put the rubber to the road.
>You're confusing the technical term of intelligence, which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence" which means something along the lines of "ability to impress people." Intelligence is not always impressive, and humans have a fantastic ability to write off the accomplishments of others when they want to make the world appear more just. I mean, nobody feels good about the fact that we're one Napoleon away from a totally different world system that may or may not suit our interests. The belief that the world is somehow fundamentally different to the extent that the next Napoleon-class intelligence who isn't content managing a hedge fund and just increasing an int in a bank database can't actually redraw the world map, is just an illusion we tell ourselves to make the world we live in more fair, the stories we live out more meaningful, and the events of our lives a little more important.
>Ironically, the fact that people like me view the AI-scare as a religious apocalypse that is as threatening as any other religious apocalypse implies one of two things:
Some alternative explanations:
3. You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"
4. You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm
5. Your view of the world is factually incorrect; I mean, you believe things like:
>Mental disabilities aside, very high IQ seems to be correlated with relatively low charm and a low ability to solve the particular problems (how to get people to do what you want) that are far more dangerous than the kind that problem-solving intelligence (as we commonly define it) is capable of solving. "Super" intelligent people are terrible problem-solvers when the problems involve other humans.
Let's assume that IQ is a good proxy for intelligence (it isn't): what IQ do you think Bill Gates or Napoleon or Warren Buffett or Karl Rove have? What IQ do you think Steve Jobs or Steve Ballmer had/have? Do you think they're just "average" or just not "very high"?
This:
>very high IQ seems to be correlated with relatively low charm
is again the just-world fallacy! There is no law of the universe that makes people who are very good at abstract problem solving bad at social situations. In fact, most hyper-successful people are almost certainly good at both.
And that ignores the fact that cognitive biases DO exist, and it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing. Do you think it takes some super-special never-going-to-be-replicated feat of non-Turing-computable human thought to write Zynga games?
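To make that concrete, here's a minimal sketch (my own illustration; the message framings and their response rates are invented) of the empirical loop marketing runs constantly: try several framings of the same message, measure which one people respond to, and keep the winner. No deep insight into any individual human is required.

```python
import random

# Toy A/B/C test over message framings (illustrative only; all rates are invented).
# The point: finding and exploiting a persuasive framing is plain empirical search.
random.seed(1)
true_click_rate = {
    "plain":    0.02,
    "scarcity": 0.05,   # "only 3 left in stock!"
    "social":   0.04,   # "your friends already signed up"
}

def measure(rate, n=5000):
    """Simulate showing the message to n users; return the observed click rate."""
    return sum(random.random() < rate for _ in range(n)) / n

observed = {name: measure(rate) for name, rate in true_click_rate.items()}
best = max(observed, key=observed.get)
print(best, observed)  # the most exploitable framing wins the experiment
```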
It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.
> which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence"
Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.
> You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"
If you think I don't always presume that everything I say is likely wrong, then you misunderstand me. I don't, however, understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.
> You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm
I can imagine many things. I can even imagine an alien race destroying our civilization tomorrow. What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.
> In fact, most hyper-successful people are almost certainly good at both.
I would gladly debate this issue if I believed you genuinely believed that. If you had a list ordered by social power of the top 100 most powerful people in the world, I doubt you would say their defining quality is intelligence.
> it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing.
Psychology is one of the fields I know most about, and I can tell you that the people most adept at exploiting others are not the ones you would call super-intelligent. You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.
> It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.
There are so many things that could disrupt that, and while AI is one of them, it is not among the top ten.
>Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.
How so? Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).
And yes, see my original comment re: it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.
>I don't, however, understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.
The state of the art is irrelevant here; in particular, most of AI seems to be moving in the direction of "use computers to emulate human neural hardware and use massive amounts of training data to compensate for the relative sparseness of the artificial neural networks."
What's imminently dangerous about AI is that all it really takes is a few innovations, possibly in seemingly unrelated areas, to enable the several people who see the pattern to go and implement AI. This is how most innovation happens, but here it could be very dangerous, because...
>What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.
AI could totally destabilize our society in a matter of hours. Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that, or incidentally caused it to happen. An AI might not be able to launch nukes directly (in the US at least, who knows what the Russians have hooked up to computers), but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack. There actually are places that will just make molecules you send them, so if the AI figures out protein folding, it could wipe out humanity with a virus.
AI is more dangerous than most things, because it has:
* limitless capability for action
* near instantaneous ability to act
The second one is really key; there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.
If you have a list of hundreds of bigger, more imminent threats that can take humanity from 2015 to 20,000 BCE in a day, I'd like to see it.
>I doubt you would say their defining quality is intelligence.
I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals" and then say "people who have chosen to become politically powerful and accomplished that goal must not be people you consider intelligent."
>You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.
Well, they can exploit people. How's that for superiority?
My background is admittedly in cognitive psychology, not clinical, but I do see your point here. I'd like to make two distinctions:
* A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it
* People who are most adept at manipulating people usually are that way because that's the main skill they've trained themselves for over the course of their lives.
>it is not among the top ten.
Of the top ten, what would take less than a week to totally destroy our current civilization?
> Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).
His goals pertained to himself. He never influenced the masses and never amassed much power.
> it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.
I didn't say it doesn't, but it doesn't take super intelligence to do that. Just more than a baseline. Hitler was no genius.
> What's imminently dangerous about AI is that all it really takes is a few innovations, possibly in seemingly unrelated areas, to enable the several people who see the pattern to go and implement AI.
That could be said of just about anything. A psychologist could accidentally discover a foolproof mechanism for brainwashing people; a microbiologist could discover an unkillable deadly microbe; an archeologist could uncover a dormant spaceship from a hostile civilization. There's nothing that shows that such breakthroughs in AI are any more imminent than in other fields.
> Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that
Why?
> but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack
Why can an AI do that but a human can't?
> limitless capability for action
God has limitless capability for action. But we have no reason whatsoever to believe that either God or true AI would reveal themselves in the near future.
> near instantaneous ability to act
No. Again,
> there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.
There's nothing that would make shit hit the fan FASTER than a hostile spaceworm devouring the planet. But both the spaceworm and the AI are currently speculative sci-fi.
> I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals"
There are a couple of problems with that: one, that is not the definition that is commonly used today. Britney Spears has a lot of ability to achieve her goals, but no one would classify her as especially intelligent. Two, that is not where AI research is going. No one is trying to make computers able to "achieve goals", but able to carry out certain computations. Those computations are very loosely correlated with actual ability to achieve goals. You could define intelligence as "the ability to kill the world with a thought" and then say AI is awfully dangerous, but that definition alone won't change AI's actual capabilities.
> A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it
I disagree. We have no data to support that prediction. We know that manipulation requires intelligence, but we do not know that added intelligence translates to added ability to manipulate and that that relationship scales.
> what would take less than a week to totally destroy our current civilization?
That is a strange question, because you have no idea how long it would take an AI. I would say that whatever an AI could achieve in a week, humans could achieve in a similar timeframe, and much sooner. In any case, as someone who worked with neural networks in the nineties, I can tell you that we haven't made as much progress as you think. We are certainly not at any point where a sudden discovery could yield true AI any more than a sudden discovery would create an unkillable virus.
> The AI becomes much smarter than us and potentially destroys everyone and everything we care about. Unless you are supremely confident in humanity's intelligence, you should be concerned about AI risk.
I acknowledge it as a real risk, but it's not terribly high on my personal list of things to worry about right now. I like what Andrew Ng said about how worrying about this now is "like worrying about over-population on Mars".
Please notice that your reply is a different argument than the one you first put forth. Originally, you weren't worried about AI because you thought it could never, even in principle, vastly exceed human abilities. Now you're basically saying, "I don't need to worry because it won't happen for a long time." That is a huge amount of ground to cede.
I'm not so confident that human-level AI will take a long time. The timeline depends on algorithmic insights, which are notoriously difficult to predict. It could be a century. It could be a decade. Still, it seems like something worth worrying about.
> Please notice that your reply is a different argument than the one you first put forth. Originally, you weren't worried about AI because you thought it could never, even in principle, vastly exceed human abilities.
I never said I wasn't worried about AI. You're extrapolating from what I did say, which I've said all along was just a thought experiment, not a position I'm actually arguing for.
I really recommend you read Bostrom; he does succinctly argue the relevant positions, if a bit drily.
It's one of those books that put the arguments so clearly that you're suddenly catapulted to a vastly better understanding of the subject than someone trying to do simple thought experiments.
Both of your arguments look outdated if you're one of the people 'in the know'.
Also, I suggest looking a bit more into what's going on in machine learning; it's suddenly become far more sophisticated than I personally realized until a couple of months ago, when I was chatting with someone currently working in it.
Or like worrying about global warming back when it would have been easier to prevent?
Ng's statement is, at best, equivalent to a student who is putting off starting their semester project until finals week. Yes, it seems far away, but the future is going to happen.
I don't know. I mean, we don't seem to even be close to actually beginning to colonize Mars, much less be close to the point of overpopulation. I think Ng's statement, formed in an analogy similar to yours, would be closer to
"a freshman student who is putting off studying for his Senior final project until his Senior year".
The question Ng asked was something like "is there any practical action we can take today to address over-population on Mars" as an analogy to "is there any practical step we can take today to address the danger of a super-AGI". And honestly, I'm not convinced there is anything practical to do about super-AGI today. Well, nothing besides pursuing the "open AI" strategy.
But I'm willing to be convinced otherwise if somebody has a good argument.
> The AI becomes much smarter than us and potentially destroys everyone and everything we care about.
What makes you think we humans won't attempt to do even more harm towards humanity? Maybe the AI will save us from ourselves, and, being so much smarter, might guide us towards our further evolution.
1. https://www.youtube.com/watch?v=zsXP8qeFF6A