I think he was way ahead of his time then.
Nowadays? Not so much: integrate it with all sorts of fuzzers, ccache, distcc, whatever, and let the "compile cloud" churn.
I've found Genetic Programming to be a powerful technique when the solution space is itself an algorithm.
ML research has moved on to deep learning, but I've always found that, coming from a CS background, GA/GP is a much more intuitive mechanism.
The basic genetic algorithm, according to the text:
1: Randomly create an initial population of programs from the available primitives (more on this in Section 2.2).
2: repeat
3: Execute each program and ascertain its fitness.
4: Select one or two program(s) from the population with a probability based on fitness to participate in genetic operations (Section 2.3).
5: Create new individual program(s) by applying genetic operations with specified probabilities (Section 2.4).
6: until an acceptable solution is found or some other stopping condition is met (e.g., a maximum number of generations is reached).
7: return the best-so-far individual
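For concreteness, here's a tiny runnable sketch of that loop in Python (the tuple-tree representation, primitives, target function, and parameters are all my own choices, not from the text, and crossover is omitted for brevity):

    import random

    # Programs are expression trees over two primitives (add, mul) and
    # two terminals (x, 1); the target behaviour is x**2 + 1.
    PRIMS, TERMS = ["add", "mul"], ["x", 1]

    def rand_prog(depth=3):
        # step 1: randomly create a program from the available primitives
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMS)
        return (random.choice(PRIMS), rand_prog(depth - 1), rand_prog(depth - 1))

    def run(p, x):
        if p == "x":
            return x
        if not isinstance(p, tuple):
            return p
        op, a, b = p
        return run(a, x) + run(b, x) if op == "add" else run(a, x) * run(b, x)

    def fitness(p):
        # step 3: execute the program and ascertain its fitness (negated error)
        return -sum((run(p, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

    def select(pop):
        # step 4: fitness-based selection, here a 3-way tournament
        return max(random.sample(pop, 3), key=fitness)

    def mutate(p, prob=0.2):
        # step 5: a genetic operation -- occasionally replace a subtree
        if random.random() < prob:
            return rand_prog(2)
        if isinstance(p, tuple):
            op, a, b = p
            return (op, mutate(a, prob), mutate(b, prob))
        return p

    pop = [rand_prog() for _ in range(200)]
    best = max(pop, key=fitness)
    for gen in range(50):          # steps 2/6: repeat until a stopping condition
        pop = [mutate(select(pop)) for _ in range(len(pop))]
        best = max(pop + [best], key=fitness)
        if fitness(best) == 0:
            break
    print(best)                    # step 7: return the best-so-far individual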
Which is to say GP seems more like a mix-in that can be added to whatever algorithm you like (add some binary operations to the unary operations most algorithms effectively use to evolve). Yet GP is spoken of as an alternative to all the other approaches, something wholly different. Plus, what is spoken of as "genetic programming proper" is this basic loop plus a bunch of less explicit but heavily assumed approaches.
The methods applied can be fairly broad, but not every Monte Carlo algorithm will work. Because of the nature of the problem space (computer programs), it's less common to see methods which assume the existence of a gradient in the space, or that the space is metric. More often you need more general-purpose, "weaker" techniques. You see simulated annealing and hill climbing, various combinatorial optimization techniques (ant colony optimization, say), etc., but even now the dominant approach is to use evolutionary computation methods, because they make few assumptions about the nature of the space beyond a smoothness criterion, and because they are highly parallelizable.
-- Which is to say that it can be formally just about anything, but is knit together by various hunches and analogies.
This sort of approach has a certain strength through flexibility. I would offer that a fully formally defined system has the strength that it can be meta-analyzed: propositions about what the entire system is capable of can be proven (for example, see the neural-network universal approximation theorem).
Also, neural networks today have settled on using "relu" layers, which are basically anything-but-smooth layers (continuous, piecewise-linear functions), and gradient descent works fine.
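For reference, a minimal illustration of that point (my own, in numpy): ReLU is continuous and piecewise-linear, not differentiable at 0, and in practice you just pick a subgradient at the kink and move on.

    import numpy as np

    def relu(x):
        # continuous, piecewise-linear; the kink is at x == 0
        return np.maximum(0.0, x)

    def relu_grad(x):
        # not differentiable at 0, so pick a subgradient (here, 0)
        return (x > 0).astype(float)

    xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(relu(xs))       # [0.  0.  0.  0.5 2. ]
    print(relu_grad(xs))  # [0. 0. 0. 1. 1.]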
Using those weaker, general-purpose techniques is also usually the default situation, because often we really don't know anything about the problem space.
Of course, if you do know something about the problem space in the form of a gradient or a probabilistic graph, then you use those things, and you won't be doing GP anymore.
"Wolfram goes off onto something else. He talks about searching through the space of possible computer programs, for ones of a certain type that can solve problems, rather than the having to go through the labor of humans programming something for a specific task. His meaning is more than a little unclear here. Wolfram says we're constrained to follow historical patterns of engineering, to only create things such that we have designed them with an understanding of what they will do. Nature, he says, doesn't have this constraint. Alright, this make more sense. He's talking about mining all potential programs, including useful ones that we ourselves might not have thought of. That's quite an idea, and sounds like no small task to actually perform it.
Now he's bringing it in the direction I thought he would — comparing the search among the space of possible programs to natural selection, which is the natural equivalent of searching the space of possible biological programs. Now Wolfram is talking about whether we can search for our physical universe among the space of possible universes, and he's going still another level "meta."
But doesn't a search in general program space almost immediately result in a combinatorial explosion?
How on Earth are we supposed to find these programs that Wolfram dreams about?
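It does. A back-of-the-envelope sketch (the toy language and counts below are my own, purely for illustration): counting distinct expression trees shows the space exploding after a handful of nodes.

    from math import comb

    def catalan(n):
        # number of distinct binary-tree shapes with n internal nodes
        return comb(2 * n, n) // (n + 1)

    # Toy language: 4 binary operators, 6 terminals (numbers picked for
    # illustration). A tree with n operator nodes has n + 1 leaves.
    OPS, TERMS = 4, 6
    for n in range(1, 11):
        programs = catalan(n) * OPS**n * TERMS**(n + 1)
        print(f"{n:2d} operators -> {programs:.3e} distinct programs")

Already at ten operator nodes that's on the order of 10^18 candidate programs, before you even consider larger programs, loops, or state.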
In my opinion, what makes natural selection work but not GP is that the lifetime of Earth and the number of individuals that have existed vastly exceed the typical parameters of a GP simulation. Biological programs are also less likely to be killed off by a mutation compared to GP programs, so in some ways the gradient is a lot smoother.
Many ideas from GP are getting reused in Bayesian program induction, in conjunction with differentiable programming, SAT solvers, etc. IMHO a very promising route to AGI.
Edit: Also, if you broaden the definition just a bit, you could call simulated annealing a kind of genetic program.
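For what it's worth, here's a sketch of that broadened view (my own code; `initial`, `mutate`, and `fitness` are placeholders you would supply): simulated annealing as GP with a population of one and mutation as its only genetic operator.

    import math
    import random

    def simulated_annealing(initial, mutate, fitness, steps=10_000, t0=1.0):
        # "GP with a population of one": no crossover, just mutation
        # plus a temperature-controlled acceptance rule.
        current = best = initial
        f_current = f_best = fitness(initial)
        for step in range(1, steps + 1):
            t = t0 * (1 - step / steps) + 1e-9   # linear cooling, kept positive
            candidate = mutate(current)
            f_cand = fitness(candidate)
            delta = f_cand - f_current
            # always accept improvements; accept regressions with
            # probability exp(delta / t), which shrinks as t cools
            if delta >= 0 or random.random() < math.exp(delta / t):
                current, f_current = candidate, f_cand
            if f_current > f_best:
                best, f_best = current, f_current
        return best

Plug in the subtree mutation and tree fitness from a GP system and the resemblance is pretty direct.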
So if you're new to the field, I would treat this as an interesting "what if", a vision of the direction AI could have taken... if it had ended up working. But not as a direction to invest a lot of your time in hopes of it paying off in a job.
(And of course, maybe someone will make a breakthrough which rejuvenates the field. If you want to have fun hacking with it, I would never discourage you! Just want to emphasize that it's not the current state-of-the-art, and it doesn't _look_ like the future of AI, as far as we can tell)
Genetic programming has been expanded and generalized and is now a core aspect of many AutoML systems.
As another commenter mentioned, when the solution space is comprised of algorithms, programs, or even just functions, GP is still useful.
I am using ideas inspired by, and aligned with, GP in my work today on anomaly detection for multivariate time series data.
Deep learning is not the be-all, end-all just because it outperforms other algorithms on a small subset of possible data types (primarily images, and to a lesser extent, audio).
One interesting direction is the mixture of traditional ML and GP, and some of the “deep GP” variants that combine some of GP’s strengths with some of DL’s.
I can imagine the problem domain, but something definite would be nice, as I'm having trouble finding anything.
Furthermore, how do you know if your program is “correct”? A programmer may use test cases, and GP does the same, but a programmer doesn’t just write code to pass test cases.
DL code-writing methods are fine if you just want to generate a bit of code for something that pretty much already exists, but GP is supposed to find completely new solutions to hard problems... so you run into this issue of “what does it mean to solve this problem.”
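To make the test-case problem concrete, here's a minimal sketch of the naive fitness function GP systems often use (my own illustration, not from any particular system); note that a lookup table scores just as well as the "real" solution:

    def fitness(program, test_cases):
        # Naive GP fitness: fraction of test cases passed. A program can
        # score a perfect 1.0 here and still be "wrong" in every sense a
        # human programmer cares about.
        passed = 0
        for args, expected in test_cases:
            try:
                if program(*args) == expected:
                    passed += 1
            except Exception:
                pass  # a crashing program simply fails that case
        return passed / len(test_cases)

    # e.g., evolving a squaring function judged by only three points:
    cases = [((2,), 4), ((3,), 9), ((-1,), 1)]
    print(fitness(lambda x: x * x, cases))               # 1.0
    print(fitness(lambda x: {2: 4, 3: 9, -1: 1}[x], cases))  # also 1.0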
I also think it hasn’t received the funding or talent it needs to progress. Probably something will come out of the field that suddenly sparks interest and it will take off again.
That's where coding theory comes in. I did some work in this area over 20 years ago during my PhD. The natural genetic coding is a very clever thing - it doesn't code the organism directly - it codes the instructions for building an organism. So naturally, there is a hierarchy of sites on the genome. The sites that code for the earliest instructions have the most impact on the final form, sites which are used later have smaller impacts. So as a human embryo develops, it only differentiates itself from a fish embryo, or a mammal embryo or a primate embryo by stages.
So the sites used at the earliest stages of phenotype development have the biggest impact on the shape of the overall fitness landscape, with smaller impacts from sites used later. This gives rise to a fractal-type fitness landscape - which has sufficient smoothness to enable the genetic algorithm to explore efficiently.
So if you want to make GP or any other type of GA effective, you need to think about your coding schema - coding schemas that build solutions through a process rather than those that directly code the solution are more likely to give rise to suitable fitness landscapes.
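A toy sketch of that idea (a hypothetical encoding of my own, just to illustrate the point): the genome is a sequence of build instructions, so mutating an early gene reshapes everything built afterwards, while mutating a late gene changes only one part.

    def develop(genome):
        # Build the phenotype by executing the genome as instructions.
        # Early genes set state that every later instruction inherits.
        plan, scale = [], 1.0
        for gene in genome:
            if gene == "bigger":
                scale *= 2.0
            elif gene == "smaller":
                scale *= 0.5
            else:  # "grow": add a segment at the current scale
                plan.append(scale)
        return plan

    genome = ["bigger", "grow", "grow", "smaller", "grow", "grow"]
    print(develop(genome))   # [2.0, 2.0, 1.0, 1.0]

    # Mutating the first gene changes every segment; mutating the
    # last gene changes only one.
    early = ["smaller"] + genome[1:]
    late = genome[:-1] + ["bigger"]
    print(develop(early))    # [0.5, 0.5, 0.25, 0.25]
    print(develop(late))     # [2.0, 2.0, 1.0]

That graded sensitivity, from early sites to late sites, is exactly what gives the landscape the hierarchical smoothness described above.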
You are confusing genetic programming and genetic algorithms.