
Shrinking bull’s-eye algorithm speeds up complex modeling from days to hours - chmaynard
http://news.mit.edu/2015/shrinking-bulls-eye-algorithm-speeds-complex-modeling-days-hours-1116
======
alexholehouse
From a cursory reading of the article this looks like it's solving an
extremely similar problem to Gaussian Process Bayesian Optimization (GPBO [1])
in a fairly similar way. I wonder what a head-to-head comparison between these
two would look like.

[1] J. Gardner, M. Kusner, Z. Xu, K. Weinberger, and J. Cunningham, in
Proceedings of the 31st International Conference on Machine Learning (ICML-14)
(JMLR Workshop and Conference Proceedings, 2014), p. 937

~~~
spooningtamarin
Yep, MOE [1] is an example of an open source GPBO.

[1] [https://github.com/sigopt/MOE](https://github.com/sigopt/MOE)
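For anyone unfamiliar with GPBO, here's a minimal sketch of the loop: fit a Gaussian process to the points evaluated so far, then use an acquisition function to pick the next point to evaluate. Everything here is illustrative (the RBF length-scale, the lower-confidence-bound acquisition rule, and the toy objective are my choices, not MOE's):

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel matrix between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(x_train, y_train, x_test, jitter=1e-6):
    """GP posterior mean and variance at x_test, given observations."""
    K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.diag(rbf(x_test, x_test)) - np.sum(Ks * v, axis=0)
    return mu, np.maximum(var, 1e-12)

def objective(x):
    # Stand-in for an expensive simulation; minimum at x = 0.7.
    return (x - 0.7) ** 2

grid = np.linspace(0.0, 1.0, 200)
x_obs = np.array([0.1, 0.5, 0.9])
y_obs = objective(x_obs)
for _ in range(10):
    mu, var = gp_posterior(x_obs, y_obs, grid)
    # Lower-confidence-bound acquisition: favor points where the
    # predicted mean is low or the model is still uncertain.
    lcb = mu - 2.0 * np.sqrt(var)
    x_next = grid[np.argmin(lcb)]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

best = x_obs[np.argmin(y_obs)]
```

The point of the GP is the same one the article makes about surrogate models: spend cheap model evaluations to decide where to spend expensive real ones.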

------
dzdt
Sounds like simulated annealing described in a simplified-for-the-journalist
way. Anyone know what the new thing really is?

~~~
stkni
The original paper is linked from the article; it's here [1].

Although I'm vaguely familiar with MCMC and Metropolis, that's not nearly
enough for me to pass comment on this advance.

[1]
[http://arxiv.org/pdf/1402.1694v4.pdf](http://arxiv.org/pdf/1402.1694v4.pdf)
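For a concrete picture of the Metropolis baseline this line of work builds on, here's a minimal random-walk Metropolis sampler. The 1-D standard-normal target and step size are toy choices for illustration, not anything from the paper:

```python
import random
import math

def metropolis(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-density."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2 / 2 up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(chain) / len(chain)
var = sum((s - mean) ** 2 for s in chain) / len(chain)
```

The chain's empirical mean and variance should approach the target's (0 and 1); the expensive part in real applications is that each `log_target` call may be a full forward simulation, which is what surrogate-model approaches try to avoid.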

~~~
misterthirsty
A significant recent shift in Bayesian methods is the move away from
Metropolis-Hastings toward approaches based on Hamiltonian dynamics,
implemented widely in the Stan package. It would be interesting to see if
that is what is being used here.
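For context, here's a minimal Hamiltonian Monte Carlo step on a 1-D standard normal. The step size and trajectory length are hand-picked toy values; Stan's actual sampler (NUTS) adapts these automatically:

```python
import random
import math

def hmc_step(log_p, grad_log_p, x, rng, eps=0.1, n_leapfrog=20):
    """One HMC step for a 1-D target with known log-density gradient."""
    p = rng.gauss(0.0, 1.0)                  # resample momentum
    x_new, p_new = x, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * eps * grad_log_p(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new
        p_new += eps * grad_log_p(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_p(x_new)
    # Metropolis accept/reject on the change in total energy.
    h_old = -log_p(x) + 0.5 * p * p
    h_new = -log_p(x_new) + 0.5 * p_new * p_new
    if math.log(rng.random()) < h_old - h_new:
        return x_new
    return x

rng = random.Random(1)
x, samples = 0.0, []
for _ in range(5000):
    # Target: standard normal, log p(z) = -z^2/2, gradient -z.
    x = hmc_step(lambda z: -0.5 * z * z, lambda z: -z, x, rng)
    samples.append(x)
```

The gradient information is what lets HMC take long, nearly-uncorrelated jumps where random-walk Metropolis would diffuse slowly; that's the practical reason for the shift mentioned above.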

------
arbitrage314
I don't understand this paper yet, but Marzouk is an awesome fella--a great
explainer, a brilliant researcher, etc. I almost took a job with him a few
years ago.

If I get through this paper quickly enough, I'll post back any significant
thoughts I may have (if any).

