
An AI for AI: New Algorithm Poised to Fuel Scientific Discovery - rbanffy
https://blogs.nvidia.com/blog/2018/01/12/an-ai-for-ai-new-algorithm-poised-to-fuel-scientific-discovery/
======
randcraw
This article is mis-titled. The ORNL project seems to be entirely about
optimizing DNN hyperparameters (like Google's AutoML, I think), and _not_
about inventing novel DNN architectures.

Worse, the work has absolutely nothing to do with Scientific Discovery, which
is a difficult and hugely ambitious area of ML study, one that involves
inducing novel hypotheses about underlying mechanisms, not just mostly random
generation and testing of plausible patterns against outcomes.

Unless an ML method proposes hypotheses that are based on the induction of
novel mechanistic principles, it ain't discovery. It's mere trial and error.

[Edited for clarity.]
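
For concreteness, here's a minimal sketch of the "generate and test" loop I
mean, i.e. random hyperparameter search. The search space and
`train_and_score` are hypothetical stand-ins (the latter for training a
network and measuring validation accuracy), not anything from MENNDL:

```python
import random

# Hypothetical search space; the article doesn't list MENNDL's actual
# parameters.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "num_layers": [2, 4, 8],
    "kernel_size": [3, 5, 7],
}

def random_candidate():
    return {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

def trial_and_error(train_and_score, trials=100):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = random_candidate()        # generate a plausible pattern
        score = train_and_score(candidate)    # test it against outcomes
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```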

~~~
chriswarbo
Yes, when I saw 'fueling scientific discovery' at first I thought of Adam and
Eve:

[http://www.cam.ac.uk/research/news/robot-scientist-becomes-f...](http://www.cam.ac.uk/research/news/robot-scientist-becomes-first-machine-to-discover-new-scientific-knowledge)

[http://www.cam.ac.uk/research/news/artificially-intelligent-...](http://www.cam.ac.uk/research/news/artificially-intelligent-robot-scientist-eve-could-boost-search-for-new-drugs)

Or work by the likes of Hod Lipson on discovering mechanisms/explanations from
observations:

[https://www.creativemachineslab.com/eureqa.html](https://www.creativemachineslab.com/eureqa.html)

I thought AI-applied-to-AI might be related to the work of Schmidhuber et al.
on self-improving search algorithms:

ftp://ftp.idsia.ch/pub/techrep/IDSIA-16-00.ps.gz

[http://people.idsia.ch/~juergen/goedelmachine.html](http://people.idsia.ch/~juergen/goedelmachine.html)

[https://arxiv.org/abs/1210.8385](https://arxiv.org/abs/1210.8385)

------
philipkglass
The Nvidia article is recent, but the referenced scientific paper is from
2015:

[https://www.researchgate.net/profile/Steven_Young11/publicat...](https://www.researchgate.net/profile/Steven_Young11/publication/301463804_Optimizing_deep_learning_hyper-parameters_through_an_evolutionary_algorithm/links/57ac9b7c08ae3765c3bac448.pdf)

Presentation:

[http://ornlcda.github.io/MLHPC2015/presentations/4-Steven.pd...](http://ornlcda.github.io/MLHPC2015/presentations/4-Steven.pdf)

------
jorgemf
In the article:

> the team developed an algorithm that automatically generates neural networks

Title of the paper:

> Optimizing Deep Learning Hyper-Parameters Through an Evolutionary Algorithm.

I don't know how to feel when things like this happen.
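
To be fair, the two descriptions can overlap: if the evolved
"hyper-parameters" include topology choices (layer counts, filter widths),
then decoding a genome does, in a narrow sense, "generate" a network. A rough
sketch of that decoding step, with illustrative names rather than MENNDL's
actual encoding:

```python
import torch.nn as nn

def decode(genome):
    """Build a CNN from an evolved hyper-parameter genome (illustrative)."""
    layers, in_channels = [], 3
    for out_channels in genome["conv_widths"]:    # e.g. [32, 64, 128]
        layers += [
            nn.Conv2d(in_channels, out_channels,
                      kernel_size=genome["kernel_size"], padding="same"),
            nn.ReLU(),
        ]
        in_channels = out_channels
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(in_channels, genome["num_classes"])]
    return nn.Sequential(*layers)

# e.g. decode({"conv_widths": [32, 64], "kernel_size": 3, "num_classes": 10})
```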

------
make3
I stopped reading at "Modeled loosely on the connections in the human brain,
these do the “learning” in deep learning."

------
Blazespinnaker
“Scaled across Titan’s 18,688 Tesla GPUs ...” I am sure that’s all Nvidia
really cared about; they’d write anything that made their customers look
smart.

------
vonnik
Like most automated machine learning, this looks incredibly
intensive/expensive computationally. I look forward to the day it's within
reach for more researchers. Until then, it's a weird kind of news that has
little impact on how most NNs get tuned.

------
JamieBeckett
However, I disagree about the lack of a connection between ORNL's work on
MENNDL and scientific discovery. I'm going by what ORNL told me, but according
to them, using DL has been hard for scientists, and this could make it easier.
I am a writer, not an engineer, but here is how ORNL explains it:
"Because scientific data often looks much different from the data used for
animal photos and speech, developing the right artificial neural network can
feel like an impossible guessing game for nonexperts. To expand the benefits
of deep learning for science, researchers need new tools to build
high-performing neural networks that don’t require specialized knowledge."

Their news release has more technical detail than I could put in my blog.
[https://www.ornl.gov/news/scaling-deep-learning-science](https://www.ornl.gov/news/scaling-deep-learning-science)

~~~
JamieBeckett
And the papers mentioned below have yet more detail.

------
rdlecler1
The math behind Gene Regulatory Networks is basically the same as that used in
Neural Networks. Gene Regulatory Networks control cell state and cellular
development (including neurogenesis), so we can see neurogenesis as nature's
way of using one neural network technology (Gene Regulatory Networks) to build
a higher-level neural network technology (Neural Networks).
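
Concretely, a common continuous-time GRN model and a recurrent-network update
both push a state vector through a weighted sum and a squashing nonlinearity.
A rough sketch (real GRN models add richer production and decay terms):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grn_step(x, W, decay=0.1, dt=0.01):
    # Gene expression dynamics: dx/dt = sigmoid(W @ x) - decay * x
    return x + dt * (sigmoid(W @ x) - decay * x)

def rnn_step(h, W):
    # Recurrent network update: h_{t+1} = sigmoid(W @ h_t)
    return sigmoid(W @ h)
```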

------
JamieBeckett
You are right about the 2015 publication. I believe ORNL waited until it had
results from a user (Fermilab) to release the information. There are also two
related papers:

-- one about Fermilab's neutrino research using MENNDL:
[http://ieeexplore.ieee.org/document/7966131/](http://ieeexplore.ieee.org/document/7966131/)

-- another about evolving deep neural networks in HPC:
[https://dl.acm.org/citation.cfm?doid=3146347.3146355](https://dl.acm.org/citation.cfm?doid=3146347.3146355)

------
deepnotderp
So neural architecture search turbocharged into 24 hours with the help of a
supercomputer?

------
arbie
How is this different from a parallel set of EA runs to tune hyperparameters
and topology?
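
To be concrete, here's roughly what I mean by that baseline: one EA generation
where the fitness evaluations run in parallel (on Titan, presumably each
candidate network trains on its own GPU). The names and the `fitness_fn`
hook are illustrative, not MENNDL's actual setup:

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Hypothetical search space covering both hyperparameters and topology.
SEARCH_SPACE = {"learning_rate": [1e-4, 1e-3, 1e-2],
                "num_layers": [2, 4, 8],
                "kernel_size": [3, 5, 7]}

def mutate(genome, rate=0.2):
    # Re-sample each gene with some probability.
    return {name: (random.choice(values) if random.random() < rate
                   else genome[name])
            for name, values in SEARCH_SPACE.items()}

def next_generation(population, fitness_fn, workers=8):
    # Evaluate all candidates in parallel, keep the top quarter, and
    # refill the population with mutated copies of the survivors.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(fitness_fn, population))
    ranked = [g for _, g in sorted(zip(scores, population),
                                   key=lambda pair: pair[0], reverse=True)]
    parents = ranked[: max(1, len(ranked) // 4)]
    return [mutate(random.choice(parents)) for _ in population]
```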

~~~
kahnjw
Seems like these articles are becoming more common. Someone claims a new
meta-AI algorithm will change everything; in reality, it's just another team's
hyperparameter optimization setup.

~~~
goatlover
Are the AIs writing their own press now?

------
killjoywashere
ORNL's next-gen computer is Summit. Lots and lots of Teslas in there.
[https://www.olcf.ornl.gov/summit-early-science/](https://www.olcf.ornl.gov/summit-early-science/)

------
denkmoon
Somewhat off topic, but I like that they named it "MENNDL". Kinda like a
fellow named "Mendel".

------
supermdguy
Sounds like this is one of the cases where an "evolutionary algorithm" is
essentially brute force search.

------
jchook
Pitting AI against itself does seem like the theme of the next major AI
frontier. Already we have GANs doing incredible things.

