
Artificial intelligence: Ten things you need to understand - ehudla
http://www.alphr.com/science/1002792/artificial-intelligence-ten-things-you-need-to-understand
======
npalli
The "breakthrough" AI of today is deep learning on massive amounts of data
applied to two areas - speech/NLP and vision. The leap from this to a super
intelligence that can take over the planet is strange. Does a child need to
look at billions of images to figure out what a chair or cat is? Will this AI
figure out how to select a good business partner?

The problem is that you ask someone who is good in one field (say electric
cars or theoretical physics) to opine on something like AI. The correct
response is to say that you don't know anything about AI. But the ego of
being a public intellectual prevents that. So what's the safest option for
not seeming dumb? Say something like: we need to make sure safeguards are in
place to prevent AI from becoming dangerous and killing everyone.

Meanwhile, people who actually build these systems know that these systems
are not generalizable to a variety of tasks (like humans are) and they are not
intelligent. At best, they augment humans in their tasks.

~~~
_vk_
>Does a child need to look at billions of images to figure out what a chair or
cat is?

Of course! Not exclusively images of cats or chairs, but children have
absolutely seen billions of images by the time they start to exhibit
discernibly human-level intelligence.

~~~
kotach
Assuming you see an image every 400ms - which, given blinking and the
activation of neural pathways, is a good approximation - a billion images at
that rate is equivalent to about 12 years of watching without ever stopping.
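The arithmetic here checks out; a quick back-of-envelope sketch (the 400ms per image and one billion images are the figures assumed above):

```python
# Back-of-envelope: how long does it take to see a billion images,
# at one image every 400 ms?
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

images = 1_000_000_000
seconds = images * 0.4                  # 400 ms per image -> 4e8 seconds
years = seconds / SECONDS_PER_YEAR

print(f"{years:.1f} years")             # roughly 12.7 years of nonstop viewing
```

So "about 12 years of never stopping to watch" is right, and that's before counting the images a child sees while moving its eyes continuously rather than in discrete frames.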

There have been systems that learned to generalize after seeing a couple of
examples, not thousands (digit recognition).

A child can see just one animal and label it a monkey; an algorithm could
probably do the same with more algorithmic machinery, but we are still not
there.

------
PaulHoule
I don't agree with the definition of "evil" used in this article.

Eichmann, for instance, didn't kill the Jews because he was "wicked"; he did
it because he was following orders. That's evil enough, and he hanged for it.

A while back we wargamed the idea of an "Evil Teddy Ruxpin" that would want to
harm you with all its might but wouldn't have much might so it wouldn't be
dangerous. It might be fun to battle with, but we figured it wouldn't be safe
because it could always start a fire.

------
jkoschei
Okay, I'll bite. I'm not particularly well-versed in AI issues, but this
article is of the end-is-nigh variety and HN tends to be a technologically
optimistic community, so I'm hoping someone can debunk this and give us reason
to be optimistic rather than terrified of our future as human batteries in The
Matrix.

Anyone?

~~~
robotkilla
> rather than terrified of our future as human batteries in The Matrix

Humans make terrible batteries, so it's more likely we'll just be
exterminated.

~~~
manicdee
Especially given how we reproduce without bounds, consume all the resources
available, and rather than dying we simply find other resources to consume to
maintain our steadily growing population.

Agent Smith was right.

------
bjornsing
> 6\. Once artificial intelligence gets smarter than humans, we've got very
> little chance of understanding it

Is that really so...? My gut feeling is that it's probably not. I don't know
exactly where this gut feeling comes from, but I think the underlying
reasoning goes something like this: Richard Feynman was a hell of a lot
smarter than I am, but I can still understand his ideas. Of course an AI could
construct incredibly long mathematical proofs, and similar, that no human
could verify, but that wouldn't be much like the difference between man and
ape. Is there really an entirely different way of understanding the universe
out there, one that is radically more productive than ours? I doubt it.

Another way to put it I guess is: I'm simply not sure the marginal utility of
(raw) intelligence is that great. In fact I remember once telling my friends
that my life would be so much better if I was just a little smarter. It was
meant as a joke.

Yet another way to think about it is to ask what's holding back our
understanding of the universe. I'd say it's not really "intelligence" at all,
but rather "money". Take gravitational waves, for instance: Einstein predicted
them some hundred years ago(!), and they were only detected just now, after
spending I don't know how many millions/billions of dollars...

But either way, this is probably one of the most interesting philosophical
questions of our time.

~~~
ehudla
Wasn't it Hofstadter who hypothesized that an intelligent system will not have
access to its own lower levels? I can't put my hands on the exact quote at the
moment. If anyone remembers, I'd appreciate the info.

------
tomg
[https://en.wikipedia.org/wiki/History_of_artificial_intellig...](https://en.wikipedia.org/wiki/History_of_artificial_intelligence#The_optimism)

------
hacker42
> It's entirely possible that the reason we've never met aliens is because
> they invented artificial intelligence before they could build spaceships
> capable of interstellar travel, and that discovery caused their extinction.

This is really not so clear, because it would require the AIs to _never_
invent space travel or produce measurable large-scale structures or signals,
which seems unlikely (assuming these sorts of things are possible in the
first place). If astronomical evidence of these sorts is physically
impossible, then there is no need to explain the Great Filter with AI in the
first place.

