
> which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence"

Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.

> You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"

If you think I don't always presume that everything I say is likely wrong, then you misunderstand me. I don't, however, understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.

> You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm

I can imagine many things. I can even imagine an alien race destroying our civilization tomorrow. What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.

> In fact, most hyper-successful people are almost certainly good at both.

I would gladly debate this issue if I believed you genuinely believed that. If you had a list of the top 100 most powerful people in the world, ordered by social power, I doubt you would say their defining quality is intelligence.

> it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing.

Psychology is one of the fields I know most about, and I can tell you that the people most adept at exploiting others are not the ones you would call super-intelligent. You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.

> It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.

There are so many things that could disrupt that, and while AI is one of them, it is not among the top ten.




>Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.

How so? Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).

And yes, see my original comment re: it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.

>I do, however, don't understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.

The state of the art is irrelevant here; in particular, most of AI seems to be moving in the direction of "use computers to emulate human neural hardware and use massive amounts of training data to compensate for the relative sparseness of the artificial neural networks."
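
For concreteness, here's a minimal sketch of that approach: a tiny feed-forward network fit by backpropagation on synthetic data. Everything in it (the layer sizes, the made-up task, the learning rate) is an illustrative assumption, and real systems are many orders of magnitude larger; it just shows the "network plus lots of training data" recipe being described:

    import numpy as np

    # Toy version of "emulate neural hardware, compensate with training data":
    # a tiny fully connected network trained by gradient descent. All sizes
    # and data here are invented for illustration.
    rng = np.random.default_rng(0)

    # 10,000 synthetic examples: label is the sign of a fixed random projection.
    X = rng.standard_normal((10000, 8))
    w_true = rng.standard_normal(8)
    y = (X @ w_true > 0).astype(float).reshape(-1, 1)

    # One hidden layer of 16 units -- "sparse" next to biological networks.
    W1 = rng.standard_normal((8, 16)) * 0.1
    W2 = rng.standard_normal((16, 1)) * 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for epoch in range(200):
        h = np.tanh(X @ W1)            # forward pass
        p = sigmoid(h @ W2)
        dp = (p - y) / len(X)          # gradient of cross-entropy loss
        dh = (dp @ W2.T) * (1 - h**2)  # backpropagate through tanh
        W2 -= lr * (h.T @ dp)
        W1 -= lr * (X.T @ dh)

    p = sigmoid(np.tanh(X @ W1) @ W2)
    print("training accuracy:", float(((p > 0.5) == y).mean()))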

What's imminently dangerous about AI is that all it really takes is a few innovations, possibly in seemingly unrelated areas, for the handful of people who see the pattern to go and implement AI. This is how most innovation happens, but here it could be very dangerous, because...

>What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.

AI could totally destabilize our society in a matter of hours. Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that, or that incidentally caused it to happen. An AI might not be able to launch nukes directly (in the US, at least; who knows what the Russians have hooked up to computers), but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack. There actually are labs that will synthesize whatever molecules you send them, so if the AI figures out protein folding, it could wipe out humanity with a virus.

AI is more dangerous than most things, because it has:

* limitless capability for action

* near instantaneous ability to act

The second one is really key; there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.

If you have a list of hundreds of bigger, more imminent threats that can take humanity from 2015 to 20,000 BCE in a day, I'd like to see it.

>I doubt you would say their defining quality is intelligence.

I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals" and then say "people who have chosen to become politically powerful and accomplished that goal must not be people you consider intelligent."

>You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.

Well, they can exploit people. How's that for superiority?

My background is admittedly in cognitive psychology, not clinical, but I do see your point here. I'd like to make two distinctions:

* A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it

* People who are most adept at manipulating others usually are that way because that's the main skill they've trained themselves in over the course of their lives.

>it is not among the top ten.

Of the top ten, what would take less than a week to totally destroy our current civilization?


> Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).

His goals pertained to himself. He never influenced the masses and never amassed much power.

> it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.

I didn't say it doesn't, but it doesn't take super intelligence to do that. Just more than a baseline. Hitler was no genius.

> What's imminently dangerous about AI is that all it really takes is a few innovations that might be in seemingly unrelated areas enable probably several people who see the pattern to go and implement AI.

That could be said of just about anything. A psychologist could accidentally discover a foolproof mechanism for brainwashing people; a microbiologist could discover an unkillable deadly microbe; an archeologist could uncover a dormant spaceship from a hostile civilization. There's nothing that shows that such breakthroughs in AI are any more imminent than in other fields.

> Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that

Why?

> but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack

Why can an AI do that but a human can't?

> limitless capability for action

God has limitless capability for action. But we have no reason whatsoever to believe that either God or true AI would reveal themselves in the near future.

> near instantaneous ability to act

No. Again,

> there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.

There's nothing that would make shit hit the fan FASTER than a hostile spaceworm devouring the planet. But both the spaceworm and the AI are currently speculative sci-fi.

> I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals"

There are a couple of problems with that: one, that is not the definition that is commonly used today. Britney Spears has a lot of ability to achieve her goals, but no one would classify her as especially intelligent. Two, that is not where AI research is going. No one is trying to make computers able to "achieve goals", but able to carry out certain computations. Those computations are very loosely correlated with actual ability to achieve goals. You could define intelligence as "the ability to kill the world with a thought" and then say AI is awfully dangerous, but that definition alone won't change AI's actual capabilities.

> A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it

I disagree. We have no data to support that prediction. We know that manipulation requires intelligence, but we do not know that added intelligence translates to added ability to manipulate and that that relationship scales.

> what would take less than a week to totally destroy our current civilization?

That is a strange question, because you have no idea how long it would take an AI. I would say that whatever an AI could achieve in a week, humans could achieve in a similar timeframe, and much sooner. In any case, as someone who worked with neural networks in the nineties, I can tell you that we haven't made as much progress as you think. We are certainly not at any point where a sudden discovery could yield true AI, any more than a sudden discovery would create an unkillable virus.



