
Push programming language and genetic programming system implemented in Clojure - amirouche
https://github.com/lspector/Clojush
======
jwr
Incidentally, I implemented Push in Clojure myself, several years ago. The
language is well designed and a good match for genetic algorithms: you need
something that remains interpretable after mutations, and very few languages
provide that.
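
To illustrate the point about interpretability after mutation, here is a toy Push-style interpreter (a hypothetical sketch in Python, not code from Clojush): instructions that lack arguments, and unknown tokens, are simply no-ops, so any sequence of tokens — including one produced by random mutation — is still a runnable program.

```python
# Toy "Push-style" integer interpreter: every token sequence is a valid
# program, because instructions without enough arguments are no-ops.
def run(program):
    stack = []
    for token in program:
        if isinstance(token, int):
            stack.append(token)                     # literals push themselves
        elif token == "+" and len(stack) >= 2:
            stack.append(stack.pop() + stack.pop())
        elif token == "*" and len(stack) >= 2:
            stack.append(stack.pop() * stack.pop())
        # anything else (unknown token, too few arguments) is silently skipped
    return stack

print(run([2, 3, "+", 4, "*"]))    # [20]
print(run(["+", 5, "*", "?", 7]))  # [5, 7] -- still valid after "mutation"
```

A mainstream language would throw a syntax or runtime error on the second program; here it just degrades gracefully, which is what makes random mutation and crossover workable.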

I learned a lot about Clojure performance while doing that work. It's funny
how most of that knowledge is now obsolete anyway (because both Clojure and
the JVM evolved so much).

------
vosper
I've always figured that GP ought to be able to solve any problem, given
sufficient computing power. Yet it's never mentioned when people talk about
machine learning. Why is that?

Also, I'd love to hear anyone's experience with GP to solve a practical
problem - whatever language it was in.

~~~
lspector
Many of the "human-competitive" results that have received "Humies" awards
([http://www.genetic-programming.org/combined.php](http://www.genetic-programming.org/combined.php))
used GP. Some colleagues and I surveyed the
first 10 years of Humies results in terms of techniques used and other
factors; preprint here:
[https://www.dropbox.com/s/tjpa6afxqibwnno/Anaylzing_the_HUMI...](https://www.dropbox.com/s/tjpa6afxqibwnno/Anaylzing_the_HUMIES.pdf).

In what I think is another exciting development (although I'm biased -- I was
his advisor), Tom Helmuth just finished his PhD at UMass using GP (and
Clojush) for automatic programming, automatically synthesizing programs for
first-year computer science assignments (abstract:
[https://www.cics.umass.edu/speakers/thomas-helmuth/2015/jul/...](https://www.cics.umass.edu/speakers/thomas-helmuth/2015/jul/27)).
This is just the tip of an iceberg, but it's a big
iceberg.

As for why GP is rarely mentioned when people talk about machine learning, I
think there are a couple of reasons, but an important one is that GP shines
brightest on different kinds of problems from those that other ML techniques
are typically applied to. If you put them head to head on a problem to which
other ML techniques can be easily applied, then GP will probably lose. But GP
can succeed on problems for which it's not at all clear how one would even
begin to apply most other ML techniques.

~~~
DougMerritt
I agree with all of that, but it's also important to note that simulated
annealing also fills in a lot of those voids, and, when it fits, it is
typically far more efficient than GP.
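
For readers unfamiliar with it, simulated annealing is cheap because it mutates a single candidate rather than evolving a population. A minimal sketch in Python (illustrative only; the objective `f(x) = (x - 3)^2` and all parameter values are made up for the example):

```python
import math
import random

# Minimal simulated annealing: perturb one candidate, always accept
# improvements, and accept worse moves with a probability that shrinks
# as the "temperature" cools.
def anneal(f, x, temp=10.0, cooling=0.95, steps=500):
    best = x
    for _ in range(steps):
        candidate = x + random.gauss(0, 1)          # small random perturbation
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate                           # accept the move
            if f(x) < f(best):
                best = x                            # track best-so-far
        temp *= cooling                             # cool the schedule
    return best

random.seed(0)
result = anneal(lambda x: (x - 3) ** 2, x=-10.0)
print(round(result, 2))  # close to the minimum at 3
```

The contrast with GP is that this searches over a fixed parameter, not over program structure — which is exactly why it only "fits" some of the same voids.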

So that narrows the application space further.

GP is fascinating on the right kind of problem.

