
Elon Musk: Mark Zuckerberg’s Understanding of AI Is “limited” - rbanffy
https://arstechnica.com/gadgets/2017/07/elon-musk-mark-zuckerberg-artificial-intelligence/
======
x2398dh1
No one has the right to presume that the future will be similar to the past,
not Elon Musk, Mark Zuckerberg, or Nostradamus.

The definition of AI in public discussion has become so loose that it's
basically a religious war. AI can mean anything from, "deep learning enabled
by neural networks on graphical processing cards," to "software tools that
seem intelligent in some way," to literal Lt. Commander Data type technology.

What does anyone mean when they accuse someone of having a dim understanding
of a poorly defined concept? To me they are basically saying, "look at me, pay
attention to me and my companies."

------
arkitaip
Musk is arrogant to think he, or anyone else for that matter, can speak
authoritatively on AI matters. He has done zero (scientific) research in the
area and tries to see so far into the future that it's ridiculous. The same
goes for Zuckerberg but at least he isn't claiming to be more right than
anyone else.

~~~
52-6F-62
But he did co-found an organization to research AI with the fellows who run
the organization we're discussing this on.

[https://openai.com/about/](https://openai.com/about/)

~~~
arkitaip
That doesn't make him automatically right on AI matters when all he is doing
is idle speculation.

~~~
rbanffy
It makes it very likely he's well informed.

------
jsonderulio
He's not passing legislation with Musk Decree.

Just trying to open the floor to discussion, ground rules, and humanism before
it's too late.

Zuckerberg's view of AI extends into the next fiscal quarter; Musk's extends
to other planets.

------
norea-armozel
I'm not sure why people persist in the idea that AI will magically leap to
human and super-human levels of intellect when we have a hard time engineering
it to see patterns that the average person can recognize with little training.
I'm not suggesting that no such AI can exist, but that we're not at the point
of comprehending the components that go into such an AI (because I think we'll
never directly create it but rather build its foundations, which will then
converge of their own accord, probably by accident).

In any case, I think the key legal problems of current AI and machine learning
by extension are related to privacy. We've let such services invade various
parts of our lives and we have no legal barriers sufficient to ensure no one's
civil rights are violated as a result. So, let's work on those issues before
we worry about making the next Colossus or Ultron.

~~~
noir_lord
I think the worry is that at some point someone will crack a self-improving
general AI and at that point the time between "we did it" and "oh shit" could
be on a human timescale very very short.

Given the potential downsides are possibly extreme (however unlikely) I'm
happy that people are paying attention.

I think a lot of "AI" is hype and hyperbole, and those people largely don't
worry me that much; it's the quiet ones doing fundamental research that worry
me a bit more.

You are talking about a breakthrough with a threat level of the same magnitude
as nuclear weapons, dinosaur-killing asteroids, etc.

Personally, if I had to put my money down, I'd say I won't see a human-level
general intelligence in my lifetime, but a self-improving/learning
intelligence doesn't have to be human-level to be very, very dangerous either.

------
wonder_bread
Who needs Mayweather Vs. McGregor news when we get to have this

------
rdlecler1
It seems like Rodney Brooks was saying the same thing to Musk last week. Musk
is out of his sandbox here. The latest iteration of AI gives us a nice
categorization and pattern matching tool which has certain ethical
consequences, but when I hear Musk talk about it it seems like he thinks that
hard AI is just around the corner. I haven't seen anything to suggest that it
is. We don't even have AI as smart as a housefly.

~~~
rbanffy
It's not really relevant whether hard AI is or is not just around the corner
if it's able to self-improve and set its own goals at a pace we can't follow
or understand. The probability of it happening this afternoon is close to
zero, but the probability of it having happened before the end of the century
is very close to 1. This means it's possible there will be no meatware
celebrating the turn of the century.

~~~
rdlecler1
It does matter because we have no idea what the world will look like when we
do get to a point where we know it's 10-20 years away. Right now we're not
even close to there yet and so it's mostly fear mongering.

