
Research Priorities for Artificial Intelligence: An Open Letter - ghosh
http://futureoflife.org/misc/open_letter
======
zeendo
The research priorities document should be read first...
[http://futureoflife.org/static/data/documents/research_prior...](http://futureoflife.org/static/data/documents/research_priorities.pdf)

Without that, the open letter makes almost no sense. The open letter is really
just a way for the signatories to say "I support the research priorities
document."

~~~
Animats
Right. The main author is Stuart Russell, author of "Artificial Intelligence:
A Modern Approach", which is a major intro AI text. Two main concerns are
expressed: that AI, whatever it is, should be "beneficial" and "robust".
Those are good points.

There are technical problems on the "robust" side. Most AI systems are still
rather brittle. Machine learning systems are brittle in ways that are not yet
well understood: small changes in input data can produce huge changes in
results. There was an article on HN about this recently, showing misrecognized
images. The paper talks a lot about formal verification, but it's not clear
that will help. "Robust", though, is an engineering problem, and can probably
be solved.
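
To make the brittleness concrete, here is a minimal sketch (NumPy only, with
random numbers standing in for a trained model and a real input) of the
effect: for a linear classifier, nudging every input value by at most a small
epsilon, in the direction given by the sign of the weights, can swing the
score wildly.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=784)           # stand-in for a "trained" weight vector
    x = rng.normal(size=784) * 0.1     # stand-in for an input image

    score = w @ x                      # positive -> class A, negative -> class B
    eps = 0.05                         # tiny per-pixel perturbation budget
    x_adv = x - eps * np.sign(w) * np.sign(score)   # push against the current class

    print("original score:  ", score)
    print("perturbed score: ", w @ x_adv)               # typically flips sign
    print("max pixel change:", np.abs(x_adv - x).max())  # never exceeds eps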

"Beneficial" is an economic and social problem. That's going to be much
tougher, because the political issue will be "beneficial for whom?" For
discussion purposes, consider that a near-term use of AI is investment asset
allocation. Machine learning is already being used in that area, but so far
mostly for technical analysis. What will a society look like where AIs
determine where investment goes, based purely on maximizing investment return?
The concept of a corporation whose sole goal is maximizing shareholder value,
combined with AIs making the decisions to achieve it, is rather scary. Yet if
such companies are more successful, they win. That's capitalism.

~~~
wging
Russell is one author. The other is Peter Norvig.

~~~
davmre
Just to avoid confusion: Norvig is the co-author of AIMA, the AI textbook. The
research priorities document that is the subject of this thread was drafted by
Stuart Russell, Daniel Dewey, and Max Tegmark, though Norvig (along with many
others) is signed on via the open letter.

------
natch
It's hard to design any system (or research program for that matter) if the
goal isn't well defined.

If the research is aimed at ensuring that AI is aligned with human interests,
another priority should be researching what we deem those interests to be.

The Law and Ethics research section of the research priorities document is
centered on mechanisms, with a couple of nods to policy, but with the seeming
assumption that the question of what is in the human interest has already been
answered.

But the answer certainly isn't obvious. Different sets of humans have wildly
diverging and fundamental disagreements over what is right or wrong for humans
and our future.

I doubt we're going to find a consensus in the time we have. It will probably
make more sense to just have smart people define a set of good principles.
This is hard enough that it should be considered a research task, and I would
think it would be at the top of the priority list.

Otherwise, how do we know that the research we are doing on other problems
even makes sense? If you just stub out the "human wishes" component of all
this as one simple "human approves" button, then you haven't solved anything.
We need a robust definition of what "aligned with human interests" means.

~~~
jimrandomh
The current most-widely-accepted answer to this is "indirect normativity".
This is basically saying that human values are complex enough that we should
instruct an AI to study humans to figure out what they are, rather than try to
program values in directly and risk getting them wrong.

I'd be a lot more comfortable if metaethics and related philosophy advanced to
the point where we didn't have to rely on indirect normativity. But I don't
think this is something you can just throw money and people at: the field is
sufficiently difficult, and progress in it sufficiently hard to judge, that
work which isn't first-rate tends to add noise and detract from progress
rather than help.

~~~
yanazendo
Indirect normativity is a losing strategy because you don't know what the
resultant ethical system will be.
[https://ordinaryideas.wordpress.com/2012/04/21/indirect-norm...](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/)

Humanity has already solved ethics, so there's no reason for us to give up and
ask AI to define ethics for us.

~~~
Houshalter
>Indirect normativity is a losing strategy because you don't know what the
resultant ethical system will be.

That's the point...

>Humanity has already solved ethics

Where on earth do you get this idea?

~~~
yanazendo
Awareness of ethical systems. What makes you think ethics isn't solved?

Here's a relevant essay: [http://medium.com/@yanazendo/how-to-be-lazy-and-successful-a...](http://medium.com/@yanazendo/how-to-be-lazy-and-successful-at-the-same-time-1c283c6880b3)

~~~
Houshalter
That's not even close to a formalization of human morality. Such a thing
probably isn't even possible. See this:
[http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/](http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/)

------
SwellJoe
And, not one mention of the fact that a huge portion of current commercial AI
research is going into figuring out how to get people to click more ads or buy
more things. "Deep learning" research at Google, Amazon, Netflix, Microsoft,
and many others, is centered on advertising and commerce. They aren't calling
it AI, but they're hiring AI people to do it and they're working with AI
techniques and tools.

I consider this dangerous, in a way that I can't really put my finger on. Is
it merely that some of the best minds in the field are being distracted by
selling more crap that people don't need? Or is it that AI will have at its
developmental core a model of inducing consumption as its prime directive? Or,
am I just an asshole that doesn't like too many things?

~~~
agibsonccc
As someone in the deep learning space who's not working at one of those
companies (deep learning on Hadoop with the Cloudera/Red Hat model is my
game), I'd like to add that while these labs are doing work with ads, it's not
necessarily all they are focused on.

I've visited a few of the teams, and many of the things being published out of
there are impressive: say, the recent real-time vision work with cars
published by Yann LeCun's group, or the voice work coming out of Baidu.

That being said: I think you're ignoring the structure of the data involved
with click-through rates and the like.

Reconstructing sparse data and building recommenders are among the hardest
problems in AI. Couple that with the unstructured data (media like text and
audio) being used at scale, and we can see some real landmark results.

Real value is being created alongside some of the commercial research being
done.

As for use cases of deep learning outside of ads, my customers are working on
many cool problems that don't involve targeting. I've done training as well as
deployments with a variety of companies now. Media-centric products coupled
with user data are among the coolest datasets out there. Let me say that
search and similarity among data is by far the best application of this stuff
outside of yet another conv net classifier.

There's a practical side to this stuff that doesn't involve ads. I hope that
helps a bit.

~~~
atrilla
> There's a practical side to this stuff that doesn't involve ads.

Just have a look at the AIMA book
([http://aima.cs.berkeley.edu/](http://aima.cs.berkeley.edu/)); it's full of
applications not related to advertising (both of its authors, Russell and
Norvig, have signed the letter). I'm going to implement many of them on my
blog ([http://ai-maker.com/](http://ai-maker.com/)), so if you want to have a
lot of fun with AI, join me in this learning quest.

~~~
agibsonccc
I did most of my first homework assignments out of that book many years ago
;). I'm actually writing my own book on deep learning with O'Reilly now.
Plenty to do ;)

~~~
atrilla
I'd be thrilled to hear your story with AI and O'Reilly. It must be very
encouraging to have this project with such an important publisher.

~~~
agibsonccc
Email's in profile.

------
nl
I really like their short summary of the automated driving problem:

 _If self-driving cars cut the roughly 40,000 annual US traffic fatalities in
half, the car makers might get not 20,000 thank-you notes, but 20,000
lawsuits._

I'm convinced this is an area that deserves serious thought. I've mentioned it
previously on here, but it's quite possible to develop "AI" driving software
that makes dramatic improvements to the overall safety outcome, yet is
dramatically less safe than manual driving in certain (rare) circumstances. As
a related example, a laptop manufacturer a few years ago developed a "face
unlock" feature and neglected to train it on people with darker skin. That was
unfortunate, but hardly life-threatening. In a car it's easy to imagine much
worse outcomes.
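
A toy illustration of why this is easy to miss (the numbers below are
invented): a system that looks ~99% accurate overall can still be nearly
useless on a rare slice of its inputs.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    is_rare = rng.random(n) < 0.01                # 1% of cases are the rare kind
    correct = np.where(is_rare,
                       rng.random(n) < 0.05,      # 5% accuracy on rare cases
                       rng.random(n) < 0.995)     # 99.5% accuracy otherwise

    print("overall accuracy:      ", correct.mean())            # looks great
    print("accuracy on rare cases:", correct[is_rare].mean())   # disastrous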

~~~
brownbat
> the automated driving problem:
>
> > If self-driving cars cut the roughly 40,000 annual US traffic fatalities
> > in half, the car makers might get not 20,000 thank-you notes, but 20,000
> > lawsuits.
>
> I'm convinced this is an area that deserves serious thought.

My fringe opinion on this topic is that automated cars actually simplify
insurance and liability considerably.

The case history of product liability isn't common knowledge, and neither,
perhaps, are the history and fundamentals of insurance, but both areas are
instructive. Each hints that when accidents are caused by automated or
designed processes, rather than spontaneous human decisions, everything gets a
lot less messy.

Here's a case very commonly taught to law students around the country:
[http://online.ceb.com/calcases/C2/24C2d453.htm](http://online.ceb.com/calcases/C2/24C2d453.htm)

In it, a coke bottle explodes, and Justice Traynor explains in concurrence
essentially how the industrial revolution had shifted the optimal way to
allocate the cost of freak accidents. There's a parallel shift for automation.

If you have manufacturers absorb all the risk and damage of accidents
involving automated cars, then the incentives are in the right place. This
doesn't work right now. You can't just automatically assign manufacturers all
liability with manual cars, because after an accident you have to sort out how
much of a dumbass or how drunk the driver was, with lengthy determinations as
to percentages of fault for all involved.

Automation strips that whole problem out of the equation (so long as one party
wasn't driving a manual car). It would drastically simplify liability and
insurance in this area.

Corner cases where a highly uncommon set of conditions prompts an accident? We
have those now. Look at all the cases about delayed recalls in the auto
industry. They're almost all about whether the manufacturer expected the set
of conditions to occur or not. Companies already deal with these issues,
because we're already using complex machines.

~~~
nl
I _hate_ the liability view of error handling.

I completely understand that knowing failure rates and likely liability
payouts lets manufacturers plan.

However, I think it is completely unacceptable to consider a particular type
of predictable "accident" acceptable because it is rare.

I think standards and safety testing are needed as well as liability laws.

~~~
brownbat
> consider a particular type of predictable "accident" acceptable

That doesn't actually happen in modern liability cases, because the system
shares your intuitions.

If a car manufacturer's defense in a class-action lawsuit is, "Yeah, but it's
only a few people," the court will bankrupt them with punitives on the spot.

This isn't about giving people a free pass to hurt small numbers of people,
it's about assuming they're responsible when they make things that hurt
people, rather than wasting time figuring out responsibility from first
principles after every single incident.

------
fiatmoney
To be clear, this is research priorities for "given that we have proto-AI
technologies, how do we manage the impact on society and make sure the full-AI
tech doesn't do something vastly unintended."

Which is somewhat worthwhile, but presupposes the existence of general AI
techniques, which is an incredibly low-investment area. Seriously, the amount
of research being pointed at anything that's plausibly "full AI" in nature is
minuscule. If anything it's lower than in the 80s (and maybe appropriately,
given the lack of payoff for the more general work).

(I could go on in detail about how there is approximately zero research being
devoted to, eg, automated refactoring of large codebases and other stuff that
you'd expect to be absolutely essential to general AI. But I'll just stop with
the naked contention.)

And it's rather silly to conjecture effects when you don't even know the order
of magnitude of the timelines, the resources required, or the bootstrappability
of the technology. There is a huge difference between "a Top100 cluster can
emulate a human brain on a 1/100 timeframe" and "a cell phone possesses
general-purpose optimization algorithms that are better at learning arbitrary
tasks than your average human".

~~~
munin
> I could go on in detail about how there is approximately zero research being
> devoted to, eg, automated refactoring of large codebases and other stuff
> that you'd expect to be absolutely essential to general AI.

the programming languages community has a fairly significant amount of effort
devoted to automated re-factoring of large codebases, there is even a DARPA
program for this:
[http://www.darpa.mil/Our_Work/I2O/Programs/Mining_and_Unders...](http://www.darpa.mil/Our_Work/I2O/Programs/Mining_and_Understanding_Software_Enclaves_%28MUSE%29.aspx)

this isn't seen as "AI" work though...

~~~
fidotron
That's the classic AI success story problem in a nutshell: anything successful
from the AI field gets rebranded as something else.

~~~
ScottBurson
In this case the situation is different. AI researchers didn't succeed at
program analysis; they gave up. @fiatmoney is right: it's no longer considered
an AI problem. But computer scientists haven't solved it either, because they
keep bumping their heads against the computability ceiling. It's very clear to
me that AI is actually required for general, accurate program analysis.

~~~
munin
why? we get general and accurate program analysis all the time using just
unification and simple inference rules...

~~~
ScottBurson
No, you don't. You may be able to infer some properties of some programs, but
you can't take an arbitrary program of real-world size and answer an arbitrary
question of a kind that a human maintainer of that program would want to know;
for example, under what circumstances, if any, it ever performs an invalid
operation -- such as an array bounds violation, throwing a runtime exception,
whatever is considered invalid in whatever language it's written in.

 _That's_ what I mean by "general and accurate".

~~~
munin
yes you can, in many dimensions. one of those we call "type checking". other
examples are the myriad uses of model checking and symbolic execution to
identify errors in programs automatically. can you identify _all_ errors? no,
but that has nothing to do with weak or strong AI. you can definitely learn
general properties about general programs using today's technology.
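
for readers who haven't seen it, here is a rough sketch of what "model
checking" means in this context (a toy transition system invented for the
example, not any real tool): enumerate every reachable state and test a safety
property in each one, returning a counterexample state if the property can be
violated.

    from collections import deque

    def successors(state):
        # toy system: two counters, each step increments one of them mod 8
        x, y = state
        return [((x + 1) % 8, y), (x, (y + 1) % 8)]

    def find_violation(initial, successors, safe):
        # breadth-first search over all reachable states
        seen, frontier = {initial}, deque([initial])
        while frontier:
            s = frontier.popleft()
            if not safe(s):
                return s               # counterexample found
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return None                    # property holds in every reachable state

    print(find_violation((0, 0), successors, lambda s: sum(s) < 20))   # None: holds
    print(find_violation((0, 0), successors, lambda s: sum(s) != 13))  # a bad state, e.g. (6, 7)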

~~~
ScottBurson
I'm certainly not saying that static analysis is useless; that would be an odd
statement considering it's the field that pays my salary!

But there's still a very big gap between what we _can_ do and what we _want_
to do. None of the extant static analyzers found the Heartbleed bug, for
example.

What I'm saying is that to close that gap will require AI. I don't think it's
just a matter of better algorithms.

------
mturmon
The attached paper ("Research priorities for...") reads like a white paper
intended for an NSF program office, with the idea that it might be a skeleton
for a research initiative and CFP from NSF.

I also have to say that it is a very bureaucratic response to a deep issue. It
seems narrow in comparison with the scope of the issue. Maybe that's what
happens when you get a bunch of specialists together.

------
jeffreyrogers
The tacit assumption here is that we can predict what aspects of AI will be
most beneficial. But if you look at the inventions/discoveries that have most
affected the world over the past few decades I think you'll find that they are
largely unpredictable, e.g. the internet, computers, antibiotics, etc. It's
actually difficult to find a non-incremental improvement that was planned. We
have such a glut of scientists right now that it doesn't make sense to focus
them in any specific direction. AI is a popular field and people are going to
be working in it, regardless of whether we adopt a set of research priorities
or not.

~~~
graycat
> adopt a set of research priorities

Try this explanation on for size: Have the whole group, with a lot of
signatures, etc., come out with a big statement, a paper, a position, policy
paper, on what the AI _research priorities_ are.

Then, presto, bingo, any prof looking for grant money can pick one or a few of
those _priorities_, adjust the direction of his work, and claim to both the
granting agency writing the check and the journals that publish his papers
that his work is on the _priorities_.

And just what is the solid basis for the _priorities_? Sure, that paper. And,
reading that paper, it looks solid, right? I mean, it very much looks like the
_right stuff_, right?

Or, maybe an AI researcher wants to do something else. So, they can claim that
their work is _leading edge, disruptive, original, crashes through old
barriers, is better than the conventional_, etc., right? And maybe that
approach will get grant money, accepted papers, tenure, etc.

Sounds to me like a way to help get grants and get papers published and, thus,
to keep the _parade_, _party_, whatever going.

Meanwhile, back to good engineering, etc.!

That is, if the paper looks weak for its claimed, obvious purpose, then maybe
look for the most promising non-obvious purpose! E.g., some people are good at
manipulation, and some people are gullible. Or, "Always look for the hidden
agenda."! Or, don't be too easy to manipulate!

------
graycat
I was in a group that did some research in AI, worked with several famous
companies, met lots of smart people, wrote and shipped some software,
published a lot of papers, gave a paper at an AAAI IAAI conference at
Stanford.

Summary: The good papers at the Stanford conference were really just good
cases of traditional engineering, statistics, etc. Anything like _artificial
intelligence_ (AI) beyond just such traditional approaches had next to no
significant role. By then, AI just looked like a new label for old bottles of
wine, some good, some bad.

The "Open Letter" and the paper it links to look like more of the same: The
solid, promising work is traditional approaches in good engineering,
statistics, optimization, etc. The rest presents severe problems in
specification, testing, verification, even security.

Maybe the best thing to do with AI that is not just good engineering, etc.,
that is, software and systems that are built but not really designed or well
tested, is at least to be careful about security and keep the thing locked up
in a strong padded cell and, especially where any harm could be done, take any
of its output with many grains of salt and as just a suggestion.

It appears that, bluntly, we just do not have even as much as a weak little
hollow hint of a tiny clue about how _intelligence_ works, not even for a
kitty cat.

When I was in the field, an excuse was that _artificial intelligence_ was not
really our work or our accomplishment but only our long-term _goal_. Hmm ....

------
stillsut
The challenge isn't creating the "Intelligence" but creating a virtual
"Environment" that trains the artificial agent to perform tasks, learn, and
alter its environment.

With a purely computational environment, all tasks that can be
learned/mastered can be reduced to basic complexity classes, e.g. prime
factorization. To our "natural" intelligence this seems a contrived skill, but
consider: better prime factorization would allow the artificial agent to mine
all the bitcoins, break passwords, etc., making the AI very rich "in the real
world". And compared to a human calculator, even the worst computer looks like
a super-intelligence in this domain.
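
A toy illustration of that contrast (pure Python; the commented-out line is a
hypothetical placeholder, not runnable): trial division makes short work of
numbers far beyond any human calculator, yet the same brute force is hopeless
at cryptographic sizes.

    def factor(n):
        """Return the prime factorization of n by trial division."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(factor(2 ** 32 + 1))    # [641, 6700417], essentially instant
    # factor(rsa_2048_modulus)    # hypothetical 2048-bit input: infeasible this way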

So to learn the skills we consider important, there needs to be a way to make
the AI experience and alter a non-virtual world, the physical (macro) world we
live in: basically some physical robot that does I/O between the environment
and the intelligence. And until the AI has some reason to optimize for
behavior in the non-virtual world (have the most kids instead of the most
bitcoins), natural and artificial intelligence will remain alien to each
other, as neither will recognize the other's reality.

With this insight, I'd diagnose the bottleneck in AI to be engineering
robotics at least as complex as insects, which we're still a long way from.

~~~
Retra
Complexity doesn't solve problems. Making our robots more complex isn't going
to make AI smarter, it's going to make it more confused.

I think your 'insight' is likely to be a false analogy.

------
brandoncarl
If anybody has interest in working on some of the Econ-related issues, please
tweet me at @brandonjcarl

------
zackmorris
I think the top priority should be how a hierarchy of weak AI agents can form
strong AI with the ability to rewrite its own notions and not become unstable.

I've seen very little about this, except for a video where (as I recall) a
multilayer neural network can be trained while holding certain connections at
a constant value. That way the network can be trained to do different tasks
and then put back into those modes by setting those training connections to
various values during runtime:

[https://www.youtube.com/watch?v=VdIURAu1-aU](https://www.youtube.com/watch?v=VdIURAu1-aU)

Eventually another neural net could be trained to know which values to hold to
get different behaviors. It sounds like one of the goals is to reuse logic for
similar tasks, and eventually be able to adapt in novel situations like our
brains do. I believe there was also a bit about how time figured into all of
this, but I haven’t watched it for a while.
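
If it helps make that concrete, here is a minimal sketch of the "hold certain
connections constant" idea, written with PyTorch purely as an illustration
(the video may use a different framework and mechanism, and the data here is
random): parameters marked as non-trainable are skipped by the optimizer, so
the rest of the network adapts around the frozen values.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

    # "hold certain connections at a constant value": freeze the first layer
    for p in net[0].parameters():
        p.requires_grad = False

    # optimize only the parameters that are still trainable
    optimizer = torch.optim.SGD(
        [p for p in net.parameters() if p.requires_grad], lr=0.1)

    x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
    for _ in range(100):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(net(x), y)
        loss.backward()
        optimizer.step()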

Anyway, I’m thinking that a lot of what we’ve learned about version control
systems like git could be used to let an AI brainstorm in a sandbox and
replace its own code as it improves, and always be able to revert or merge if
needed. Throw in some genetic programming and decent lisp machines with 1000
cores and we could really have something.

When I think about this sort of “real work”, as opposed to the daily minutiae
our industry gets mired in, it makes me sad inside.

~~~
atrilla
This aggregation/collaboration of weak agents... isn't it boosting? It's
certainly a recommended approach to building robust systems, see Pedro
Domingos' paper (tip number 10):

[http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf](http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)

But I had the impression that the letter had a more transcendental feel than
just focusing on a particular tuning technique.
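
For anyone who hasn't seen boosting in code, here is a minimal sketch
(scikit-learn and synthetic data, chosen purely for illustration; the letter
itself doesn't prescribe any particular technique): a single depth-1 decision
"stump" is a weak learner, and boosting a couple of hundred of them gives a
noticeably stronger classifier.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    stump = DecisionTreeClassifier(max_depth=1)      # a deliberately weak learner
    print("one stump:         ", stump.fit(X_train, y_train).score(X_test, y_test))

    boosted = AdaBoostClassifier(n_estimators=200, random_state=0)
    print("200 boosted stumps:", boosted.fit(X_train, y_train).score(X_test, y_test))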

------
atrilla
I'm glad to read such a positive attitude towards AI... I recently discussed
in another thread whether AI was indeed pushing people into the unemployment
lines (an opinion from an MIT Tech Review article). Link as follows:

[https://news.ycombinator.com/item?id=8866253](https://news.ycombinator.com/item?id=8866253)

Parent:
[https://news.ycombinator.com/item?id=8863279](https://news.ycombinator.com/item?id=8863279)

As I said, I am so firmly convinced that AI has so much good to do that I just
created a blog ([http://ai-maker.com/](http://ai-maker.com/)) solely dedicated
to AI and its applications, and I'm going to dedicate my spare time over the
coming years to growing this side project into something awesome, because
that's where AI is leading us.

~~~
gjm11
You have posted _four_ links to your blog just in this thread. Forgive me if
you find this rude, but I think that's at least two more than necessary.

~~~
atrilla
Sorry for that. I just found it appropriate for each of the different
sub-discussions (and I just removed one of them). When threads get massive
with text, I sometimes miss interesting stuff, and I didn't want this to
happen to other readers.

I'm starting this side project and recently HN has been on fire wrt AI. I'm
seriously willing to put a lot of time into this, and the more people I can
help, the more I can learn.

Sorry again if that bothered you or any of the other watchers here.

------
lifeisstillgood
I propose Brian's Law - any AI sufficiently advanced to learn faster than
humans would learn from human history and stop talking to us.

------
deepsearch
In Silico Cognitive Biomimicry: What Artificial Intelligence Could Be
[http://genopharmix.com/biomimetic-cognition/in_silico_cognit...](http://genopharmix.com/biomimetic-cognition/in_silico_cognitive_biomimicry.html)

------
biomimic
Mimicking a portion of human cognition should be #1. For example:

What do you think of when you think of the word "sky"? Our machines and
algorithms need to produce an answer similar to yours.
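
One small, concrete way machines already produce an answer of that kind is
nearest neighbours in a word-embedding space. A hedged sketch (gensim with a
small pretrained GloVe model, chosen only as an example; the first run
downloads the vectors):

    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")   # small pretrained GloVe vectors
    for word, similarity in vectors.most_similar("sky", topn=5):
        print(word, round(similarity, 2))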

~~~
Retra
We don't want computers to be as dumb as we are. We already know how to make
human brains that can answer such questions poorly; what good does it do to
have a machine do it too?

~~~
mc808
There are cases where it would help to have realistic simulations instead of
real humans. E.g. artificial companions, negotiations, public relations, or
experimentation that would be unethical on living humans (unless we end up
with a theory and evidence indicating that machines can be sentient). Or there
may be some huge task that would benefit from having trillions of "people"
working on it.

~~~
Retra
That's true, but there is no known way to verify proper operation of those
kinds of machines. What happens if your artificial companion stops liking you?
What if your negotiation robot throws a tantrum?

Well, then the robot isn't doing what we want it to do. But you asked for
human behavior, and that's what you got. There is not really any point in
making robots do what humans do. We want them to do what humans can't do --
even if we want them to seem human while they do it.

Addendum:

>(unless we end up with a theory and evidence indicating that machines can be
sentient).

I think we'd sooner find a theory indicating that sentience is either not
well-defined or not a useful concept for making decisions. If we found that
ants were sentient, we'd still step on them and not care.

This kind of talk is exactly the kind of reason I feel most people are useless
for understanding ethics: you can't make decisions based on the answers of
arbitrary grammatical yes/no questions. It's the same reason mathematicians
don't really care if P=NP is true or not. You can just suppose one way or the
other and you'll be no better off without a proof.

The value in the proof is an advancement in conceptualization and language.
Similarly, an important advancement in ethics/morality/AI will very likely
make your concerns about sentience obsolete in a way that should be obvious to
you.

------
jostmey
Out of all the things to worry about, AI is at the bottom of my list.
Third-world countries, declining air quality, war, etc. are all things more
deserving of our time and energy.

~~~
yanazendo
Wrong. You underestimate the power of AI. AI profoundly affects economics,
since economics is fundamentally computational.

~~~
lifeisstillgood
I think a plain "wrong" mistakenly assumes you both have the same definition
of AI.

I suspect the parent comment meant "of all things, an Intelligence Explosion
by general purpose AI", which, to be fair, even the group set up here to study
it does not seriously believe in.

You are right in that it's almost impossible to deny the impact that some AI
research has led to, but that is the field of AI, not the existence of "real"
AI.

You might both be talking past each other.

~~~
jimrandomh
> I suspect the parent comment meant "of all things, an Intelligence Explosion
> by general purpose AI", which, to be fair, even the group set up here to
> study it does not seriously believe in.

The question of whether it will actually end up happening is unclear, but my
understanding is that FLI takes the possibility of an intelligence explosion
pretty seriously.

This is a difficult question to meaningfully engage with, without a lot of
background research. Nick Bostrom's book "Superintelligence: Paths, Dangers,
Strategies" is a good entry point into the subject.

------
Maro
Disclaimer: maybe I'm not the target audience of this letter.

What a terrible letter. Hemingway grade is "20, bad".

451 words, but the only hint of what the authors want is "We recommend expanded
research aimed at ensuring that increasingly capable AI systems are robust and
beneficial: our AI systems must do what we want them to do ... In summary, we
believe that research on how to make AI systems robust and beneficial is both
important and timely, and that there are concrete research directions that can
be pursued today." But, reading this, I just think, "Sure, that sounds good."

There's an attachment (which I won't read), but it would have been nice if the
original document contained some information about what the point of this is,
so that I'm motivated to read it.

~~~
jimrandomh
The attachment _is_ the letter.

~~~
rgbrgb
Link:
[http://futureoflife.org/static/data/documents/research_prior...](http://futureoflife.org/static/data/documents/research_priorities.pdf)

