
Neural Programmer: Inducing Latent Programs with Gradient Descent [pdf] - dave_sullivan
http://xxx.lanl.gov/pdf/1511.04834v1.pdf
======
abeppu
When I was at school one of my professors joked that the halting problem would
ensure that, whatever else we were able to automate, there would always be
jobs for programmers. I wonder.

A while ago there was the automatic statistician [1,2] which can do various
statistical analyses and reporting automatically. This year there was a paper
out of MIT on Deep Feature Synthesis, in which a largely automated system did
quite well on Kaggle problems [3]. Now this, which seems like it could produce
solutions to some problems I've used in technical phone screens.

At some point, someone will write a framework which automates the process of
finding human cognitive tasks to automate, and someone else will write a
cost function of automated-task-to-business-need-mismatch which is amenable to
optimization, and then we can all go home.

[1] [http://www.automaticstatistician.com/index/](http://www.automaticstatistician.com/index/)
[2] [http://mlg.eng.cam.ac.uk/lloyd/talks/jrl-auto-stat-msr-2014.pdf](http://mlg.eng.cam.ac.uk/lloyd/talks/jrl-auto-stat-msr-2014.pdf)
[3] [https://groups.csail.mit.edu/EVO-DesignOpt/groupWebSite/uploads/Site/DSAA_DSM_2015.pdf](https://groups.csail.mit.edu/EVO-DesignOpt/groupWebSite/uploads/Site/DSAA_DSM_2015.pdf)

~~~
lern_too_spel
The halting problem applies equally to human intelligence.

------
norswap
This is going to be a very meta comment. You have been warned.

The whole field of machine learning makes me kind of nervous. The problem is
that the created systems seem kinda magical (quote Clarke, I double dare you).
Even their creators don't seem to really understand what goes on in them. It's
build it and see what comes out. It's heuristics all the way down.

My feeling: the lack of predictability that results makes these technologies
good, but not great. It's why Google's search results still suck, and why
suggestion engines are either not all that smart or painfully transparent.

If one system in isolation is already hard to predict, what about interacting
systems? Increasingly, our experience is shaped by these systems (search
result customization etc). Isn't there a vicious feedback loop in there
somewhere that pushes us somewhere at the whim of the un-understandable
interactions of un-understandable machines?

~~~
mrdrozdov
SVMs, kernels, and online learning are pretty well understood. Neural networks
are an abuse of the chain rule: they make your model dependent on a bunch of
partial derivatives whose importance you don't necessarily understand, but
which happen to be extremely effective when given lots of data. I think neural
nets are probably what you're referring to. If you understand the intuition
behind your problem, then you can reason about what those partial derivatives
might represent. Except a lot of the time we don't really have that level of
understanding. What is your level of experience with machine learning? I'm a
little surprised by your perception.
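The "chain rule" point can be made concrete with a toy example (a hypothetical sketch, not anything from the paper): in a one-neuron "network", the gradient of the loss with respect to the weight is just a product of local partial derivatives, each of which is easy to compute in isolation but hard to assign meaning to.

```python
import math

# Toy one-neuron "network": y = sigmoid(w * x), loss = (y - t)^2.
# Backprop here is literally the chain rule:
#   dL/dw = dL/dy * dy/dz * dz/dw

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_and_grad(w, x, t):
    z = w * x                     # pre-activation
    y = sigmoid(z)                # activation
    loss = (y - t) ** 2

    # Local partial derivatives, multiplied together by the chain rule.
    dL_dy = 2.0 * (y - t)
    dy_dz = y * (1.0 - y)         # derivative of the sigmoid
    dz_dw = x
    dL_dw = dL_dy * dy_dz * dz_dw
    return loss, dL_dw

# One step of gradient descent on the weight.
w, x, t, lr = 0.5, 1.0, 1.0, 0.1
loss_before, grad = forward_and_grad(w, x, t)
w -= lr * grad
loss_after, _ = forward_and_grad(w, x, t)
```

Each factor (`dL_dy`, `dy_dz`, `dz_dw`) is individually trivial, but once you stack millions of them across layers, nobody can say what any particular one "means" — which is roughly the complaint above.
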

~~~
te_chris
I'm not OP, but I think you're misreading their comment. Their point, if I
read it correctly, is not about the technical correctness of the ML in place,
but about its sociological effects. You can see this happening already: I'm
sure FB's machine learning algos that they use to filter the stream are
'correct' in some way, but their effect on the way we consume information is
insidious and polarising.

~~~
mrdrozdov
I think that's more a reflection of how Facebook chooses to use the results of
an algorithm rather than the algorithms themselves.

------
taliesinb
The rate of new ideas in deep-learning-type subfields is starting to remind me
of the Cambrian explosion, which generated in only a few million years a vast
range of genetic novelty to be whittled down into the familiar stock of body
plans we see today.

Indeed, many of the interesting new papers are elaborations of old ideas along
new themes, or useful and elegant new combinations of old ideas, with the
occasional attention-grabbing paper where someone tries something that seems
like it can't work, it does, and we suddenly have an entirely new thing in our
toolbox.

Here is a highly unusual paper that showed up on HN recently, to little
discussion.

It tackles a problem related to the OP paper, but in a way that could not be
more different from the current defaults: gradients, linear algebra, large
volumes of training data, hands-free training.

It may end up being less impressive than it seems, but it is still a bit mind
bending.

[http://journals.plos.org/plosone/article?id=10.1371/journal....](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0140866)

------
obvio171
The paper says it's a general program induction solution. Does that mean one
can already feed Neural Programmer into itself (i.e. make it learn from its own
previous executions' inputs—training sets—and outputs—learned programs) and
see if it comes up with a more efficient version of itself?

------
aardvark179
ArXiv link [http://arxiv.org/abs/1511.04834](http://arxiv.org/abs/1511.04834)

------
nl
I for one welcome our new programming overlords.

Fuck, that is impressive. I've seen some talks from Google Brain people
referencing bits of this but never understood the full picture. Damn. We
should all go home now.

------
primaryobjects
I feel compelled to also mention "Using Artificial Intelligence to Write Self-
Modifying/Improving Programs"
[http://www.primaryobjects.com/CMS/Article149](http://www.primaryobjects.com/CMS/Article149)

The link above describes a project where an AI wrote programs for Hello World,
addition, subtraction, multiplication, Fibonacci, bottles of beer on the wall,
and a bunch more.

