
Experts pledge to rein in AI research - ohaal
http://www.bbc.com/news/technology-30777834
======
bsaunder
"In the short term, this could mean research into the economic effects of AI
to stop smart systems putting millions of people out of work."

This seems unfortunate and somewhat challenging to do. Our current economic
model encourages improving efficiency of systems. This seems like a good
thing. It's really too bad that people "need" jobs. Jobs should be creating
value or they shouldn't exist. Artificially "creating" jobs to prop up the
system feels like fighting against reality and a bad long-term plan.

~~~
bulletsvshumans
How about a tax on AI that goes towards unemployment benefits of some kind?

~~~
3beard
I don't understand why AI eliminating jobs is considered a problem. If any job
can be done by a computer program it simply means that humanity has outgrown
this kind of work and that nobody should do that mind-numbing crap anymore.
Let's forget that the word "computer" used to mean "guy who sits in an office
of an accounting firm, and adds numbers all day long".

~~~
AlisdairO
That's fantastic, as long as there are alternative jobs for those people to go
to that they can feasibly perform. This has historically been the case with
technology replacing work, but it's not clear whether it's the case this time.

If there are no (or insufficient) jobs for people to go to, in a society that
predicates your ability to live a pleasant life on having one, you have a huge
social problem. Our current system is not set up to cope with 40% of the
population being out of work.

------
amelius
Why not do the same with all of technology?

Why do shareholders of big corporations profit from science in a grossly
disproportionate way, while more than 50% of the world's population has to
live on under $2 a day?

It is time for the world's greatest minds to start thinking about how to fix
capitalism, because it seems to be seriously broken.

And we need it fixed more than we need e.g. iPhone 7.0, or Google Adwords 2.0.

~~~
blfr
Coming from a formerly communist country, I have a severe distrust towards
"great minds" with some radical fix for capitalism.

Actually, we should start treating social fixes like we do technical rollouts.
Prove it on a small scale somewhere first, and then carefully expand.

This seems like common sense. Yet for some reason changes in policies tend to
be sweeping, national or even international.

~~~
kylebrown
Clinical medicine is a better analogy than technical rollouts. But you're
right, modern development economics argues that social fixes (policies) should
be empirically driven, using evidence-based randomized controlled trials.

Esther Duflo has been pushing this approach at the MIT Poverty Action Lab. The main
insight is to reject grand generalizations and broad theorizing. Sitting in an
armchair pondering how to "fix capitalism" is unlikely to lead to useful lines
of thought. Society is too complex a system; cause-and-effect relationships
can be highly localized and context-dependent, and formulaic thinking just
leads one down ideological rabbit holes.

------
maaaats
I wonder: Why should we care what Musk and Hawking think about AI? This
article doesn't mention too much bad stuff, but earlier they have said that we
should be afraid of AI/singularity.

Having just written my thesis in AI, I probably know far more than those two
about this.
And we're sooo far away from AI being a superforce destroying mankind.

~~~
rndn
You can probably assume they are in contact with the leading experts in the
field. From the letter:

        The initial version of this document was drafted by
        Stuart Russell, Daniel Dewey & Max Tegmark, with major
        input from Janos Kramar & Richard Mallah, and reflects
        valuable feedback from Anthony Aguirre, Erik
        Brynjolfsson, Ryan Calo, Tom Dietterich, Dileep George,
        Bill Hibbard, Demis Hassabis, Eric Horvitz, Leslie Pack
        Kaelbling, James Manyika, Luke Muehlhauser, Michael
        Osborne, David Parkes, Heather Roff Perkins, Francesca
        Rossi, Bart Selman, Murray Shanahan, and many others.

------
Beltiras
Properly implemented, AI would, and should, succeed human intelligence. We are
nowhere near even understanding it as a problem, much less solving it. I
attended an AGI conference a couple of years ago (summer of 2012 iirc). The
general feeling was that we are still a lifetime away from a solution.

------
rndn
This was discussed two days ago:
[https://news.ycombinator.com/item?id=8870456](https://news.ycombinator.com/item?id=8870456)

------
faizshah
I don't know much about the philosophy of AI and I'm only familiar at a basic
level with modern AI algorithms. From what I have been exposed to I don't see
any reason to think AI is any more than a set of statistical frameworks. Is
there any reason to believe that these statistical frameworks are comparable
to biological intelligence?

Am I thinking about this the wrong way?

~~~
harshreality
It sounds like you're talking about the kinds of AI in use today? That's not
what the cautions are about, since current "AI", however good at reading
individual words or flying drones, is not yet capable of human-level thought.
The cautions are about trans-sapient AI, which doesn't exist yet. Even if it's
simply a beefed-up "set of statistical frameworks" linked in the right way to
get a computer to behave like a human, consider what humans do: humans develop
and use nuclear weapons, humans go on shooting sprees, humans decide to go to
war...

~~~
faizshah
I see, I didn't think about that. So you mean AIs that are not necessarily
intelligent or conscious but are just more integrated into our lives, for
example in the military or law enforcement?

I can see how a bug in those kinds of systems could cause things like that
without necessarily being conscious or intelligent.

~~~
Swizec
The implication is that humans are just statistical algorithms. We have so far
not found any evidence to the contrary.

------
walterbell
> _Research into AI, using a variety of approaches, had brought about great
> progress on speech recognition, image analysis, driverless cars, translation
> and robot motion, it said._

How much of this progress required training data generated by working humans?
What would feed future statistical algorithms if this source of training data
was greatly reduced?

------
brador
So long as we can pull the plug or disconnect the interfaces we'll be ok with
AI. Once we can't, then we have a problem.

In effect the scariest AI is distributed, self-propagating, and can't be
unpowered. Effectively a virus. I have yet to see a meaningful distributed AI,
even in concept.

~~~
Cakez0r
But it's not inconceivable that in the near future an AI could train itself to
mutate (even if this means interfacing with mechanical turk or freelance
websites and paying humans to do it).

------
saalweachter
I'm glad someone is finally addressing this, what with a major advance in
artificial intelligence being only five or ten years away.

------
sigzero
People make mistakes. That is all.

------
jheriko
This sounds like a publicity stunt...

My real concern with all of this is always the uncontrolled ecosystem of
steadily evolving viruses and malware. We will never have control of that...
and there is no telling what it can become in the future.

I think it will be a simple error induced by some random mutation in one of
these malicious programs, not some vast artificial intelligence, that causes
us problems in this arena first.

~~~
drcomputer
We need more AI working in the domain of computer security. Stuff that learns
under specific guidelines to restrict computational behavior, given
specifications of expected behavior.

~~~
jacquesm
You'll have AI working the other side of the arms race just the same. Tools
aren't just used by the good guys.

~~~
drcomputer
That won't stop the bad guys from making those tools while the good guys
don't, because they are afraid their tools will be used by the bad guys.

Ignorance is not a good defense strategy.

~~~
walterbell
Inadvertent bad guys (good intentions with unpredictable errors) can be more
damaging than intentional attacks.

~~~
drcomputer
Can you cite an example?

~~~
walterbell
Any well-known vulnerability, e.g. Heartbleed, gotofail, etc.

~~~
drcomputer
I am genuinely interested in hearing the 'good intention' explanation of these
bugs.

