After a few years, it turned out that this approach only worked on problems for which hill climbing worked really well. You need a well-chosen metric for "sorted", and that guides the hill climber to converge on a bubble sort. (But not Quicksort.)
This needs to be demonstrated on a harder problem.
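To make the bubble-sort point concrete, here is a minimal toy sketch (my own illustration, not the original system): take the total inversion count as the "sorted" metric and random adjacent swaps as the move set. Every accepted swap removes exactly one inversion, so the climber has no local optima and effectively walks through a bubble-sort-like sequence of adjacent exchanges — but a move set this narrow can never discover a divide-and-conquer strategy like Quicksort.

```python
import random

def inversions(xs):
    # The "sorted" metric: total number of out-of-order pairs (0 means sorted).
    n = len(xs)
    return sum(1 for i in range(n) for j in range(i + 1, n) if xs[i] > xs[j])

def hill_climb_sort(xs, max_steps=10000):
    # Hill climb over adjacent swaps; keep a swap only if the metric improves.
    # Swapping an out-of-order adjacent pair reduces inversions by exactly 1,
    # so progress is always possible until the list is sorted.
    xs = list(xs)
    for _ in range(max_steps):
        score = inversions(xs)
        if score == 0:
            break
        i = random.randrange(len(xs) - 1)
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
        if inversions(xs) >= score:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]  # revert non-improving swap
    return xs

print(hill_climb_sort([5, 3, 1, 4, 2]))  # converges to the sorted list
```

The accepted moves are exactly bubble sort's adjacent exchanges, which is the sense in which the metric "guides" the climber to that algorithm and no further.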
As for Eurisko, I'm unaware of the source code being made publicly available, and several people appear to have asked for the source without success. However, I think enough has been published on Eurisko to reproduce it, or something similar, from scratch.
OpenCyc (http://opencyc.org/) was released in 2012, and is available on SourceForge. Cyc was released earlier this year (https://www.lucid.ai/press-releases/mit-tech-review-an-ai-wi...). Lucid (https://www.lucid.ai/) has several case studies of applications of Cyc by Cleveland Clinic, the U.S. Forest Service, a Large Global Bank, and a Global Energy Company.
> A result from AM/Eurisko doesn't mean much. A result from a deep network may be a crack in the dike which is about to explode and solve longstanding challenges like Imagenet.
Sounds like extreme and unsubstantiated bias. Statements like this are why I am highly skeptical of the current neural network hype.
I am comparing a single system and all its variants and followups to another family. Oh wait, there aren't any variants and followups to AM/Eurisko except Cyc. Huh. How about that.
> Sounds like extreme and unsubstantiated bias.
ImageNet? AlphaGo? SOTA on language parsing, classification, and prediction tasks? Human-level performance on scores of Atari games? High-quality image synthesis, unsupervised and from textual descriptions? Predictions of visual cortex activations? Program synthesis? If you aren't impressed, you aren't paying attention.
Cyc belongs to the family of rule-based expert systems. Expert systems were successfully used in medical diagnostics, chemistry, biology and various branches of engineering. Not to mention countless "trivial" applications in planning and logistics for businesses. I could also make a case that DeepBlue was an expert system, and thus add "superhuman performance in chess" to the list.
Saying that results from an expert system don't matter (simply because it's an expert system), while believing equivalent results from an ANN will "explode and solve longstanding challenges" (simply because it's "neural") makes no sense. ANNs are not magic.
I've done pure math to the graduate level and a modest amount of programming. But even with both these backgrounds, I've been bitten hard by efforts to do applied math. It seems that applied math is very much the domain of people who both know algorithms and know how to "squint correctly" at a given problem and see whether algorithm X is appropriate. And that seems to be a very necessary skill. Where the validity of one approach stops and another begins is fairly opaque until one is an expert.
The thing is that neural networks appear to be fairly similar. They're somewhat generic, but tuning again requires high-level expertise and tutelage.
The worst-case scenario is that deep learning solves a broad but still limited class of problems, expanding somewhat the nebulous area covered by applied math. Then all the expertise and credibility goes there, and gradually the nebulous area covered by deep learning stops expanding and progress actually becomes harder, since there's less real knowledge, just general intuition about a single, quite general approach.
*... tuning again requires high level expertise and tutelage.*
It appears that more luck (or "serendipity", if one is generous) is what's required!
As new algorithms crop up, everyone moves to them to redo older problems. That continues until the next new thing pops up. Pedro Domingos's book, *The Master Algorithm*, describes this somewhat.
Jump to the experiments section to get an overview of what it can do.