This is not some built-in language feature or a bundled library from the distribution. It's still pretty cool; I remember playing around with it a few years back.
Kind of, though it's not exactly the Prolog flavor. In Prolog you also define such facts and rules, but then you only derive new facts "on demand". That is, if you ask whether Bob is the father of Jane, the system goes off and tries to find out whether that's the case. This is called "backward chaining" (https://en.wikipedia.org/wiki/Backward_chaining).
In contrast, the system as presented takes a set of facts and rules and automatically computes all of their consequences, before you ever get to specify what questions you want to ask. This is "forward chaining" (https://en.wikipedia.org/wiki/Forward_chaining). One forward-chaining system similar to the one presented here is CHR (https://en.wikipedia.org/wiki/Constraint_Handling_Rules), for which several implementations exist... including in Prolog.
Both approaches have certain advantages and disadvantages depending on the application in question.
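To make the distinction concrete, here's a minimal forward-chaining sketch in Python (my own toy illustration, not the system from the article or CHR): facts are tuples, each rule maps the current fact set to derivable facts, and rules are applied repeatedly until a fixpoint is reached.

```python
# Toy forward chaining: facts are tuples, each rule is a function that
# maps the current fact set to a set of derivable facts. We apply all
# rules repeatedly until no new facts appear (a fixpoint).

def forward_chain(facts, rules):
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts
        if not new:  # fixpoint: every consequence is already known
            return facts
        facts |= new

# One hand-written rule: parent(A, B) and parent(B, C) => grandparent(A, C).
def grandparent_rule(facts):
    parents = [f for f in facts if f[0] == "parent"]
    return {("grandparent", a, c)
            for (_, a, b1) in parents
            for (_, b2, c) in parents
            if b1 == b2}

facts = {("parent", "bob", "jane"), ("parent", "jane", "tim")}
all_facts = forward_chain(facts, [grandparent_rule])
# all_facts now also contains ("grandparent", "bob", "tim"),
# computed up front, before any query is ever asked.
```

A backward chainer would instead do this search lazily, starting from a query like "is Bob a grandparent of Tim?" and working back to the parent facts.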
I think the full history is a little more complex.
In the early days, there were two camps: the "connectionists" who worked on neural-net type stuff, and the symbolic reasoners who worked on hand-authored rule-based systems. Both fell under the "AI" umbrella, believed their approach was the one true one, and squabbled over funding and public perception. (Because public perception affects funding.) Remember that at the time, much AI research was government or defense funded, so politics was heavily involved.
The connectionists invented neural networks. The symbolic folks gave us Lisp, Prolog, and a lot of compiler and parser theory stuff.
The connectionists hit a wall in the sixties, and shortly afterwards "Perceptrons" was published. That book deliberately pointed out the limitations of neural networks at the time and effectively shut down research into them for decades. It was one of the causes of the "AI winter" of the 80s.
After that, "AI" became roughly synonymous with symbolic reasoning and rule-based expert systems because that camp had won.
Then, in the 80s, backpropagation and other learning techniques for neural nets were finally figured out, and those researchers started making progress again. Two AI winters had happened by then, so "AI" no longer had all of the positive connotations it used to (at least when it came to funding), and by that point the term referred almost solely to symbolic reasoning, so researchers started using "machine learning" to refer to neural-network-based AI.
In the early 2000s, big tech companies found themselves with lots of cheap computational power and tons of data on their hands, the two key ingredients to make machine learning useful. Meanwhile, symbolic reasoning and expert systems had petered out.
So "machine learning" got bigger and bigger until eventually it became the main computer intelligence approach in town. On top of that, it's gotten smarter and smarter until the public has started associating it with the old image of what "AI" means. So now you see "AI" coming back to refer to what is, essentially, the same connectionist approach it used to include in the 60s.
C. Hewitt, P. Bishop, and R. Steiger, "A Universal Modular ACTOR Formalism for Artificial Intelligence," in Proceedings of the 3rd International Joint Conference on Artificial Intelligence, San Francisco, 1973.
Prof. Rodney Brooks (MIT, Robot Lab), who is famous for his subsumption architecture (SA) and for arguing against Minsky's central-model view of representation (advocating instead for radically separate distributed systems), wrote SA and nearly all of his research code in LISP. In fact, Brooks wrote a book on LISP programming and developed his own efficient LISP engine. Many, many of his grad students have gone on to become leaders of the AI (not ML) world.
Curiously, I am now reading "The Elements of Artificial Intelligence - An Introduction Using LISP", which depicts a "Knowledge Engineer" stirring a cauldron labelled "Expert System". Copyright is 1987. It's a joy to see how far we've come in some respects, but how little progress we've made in others. Perhaps this represents a measure of the maturity of certain subdomains?
I suspect in another decade "we" will rediscover the wisdom of those who have developed symbolic knowledge representation.
It's clear that AIs based solely on FOL are unlikely to succeed, but it's also likely that any system needing to solve problems whose exact solutions can't be found in a number of steps polynomial in the input size will require ideas similar to the core of the older approaches. There are problems where wide-ranging search can't be avoided.
Other nice perks of the hybrid approach are data efficiency, compact specification, easier composition, and the ability to grow or alter your representation and generate new inferences on the fly (by inferences I mean the results of learning or conditioning on new information, not what people mean when they say prediction). If you learn a new fact, you can go back and explicitly work out its consequences for all the other facts and the inferences generated from them, as well as generate new inferences that might not have existed before. All of this is more easily done with "symbolic" representations.
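Here's a toy sketch of that "learn a fact, re-derive the consequences" idea (my own illustration; the `ancestor` relation and `close` helper are made up for this example):

```python
# Hypothetical knowledge base closed under transitivity:
# ancestor(A, B) and ancestor(B, C) => ancestor(A, C).

def close(facts):
    facts = set(facts)
    while True:
        derived = {("ancestor", a, c)
                   for (r1, a, b1) in facts
                   for (r2, b2, c) in facts
                   if r1 == "ancestor" and r2 == "ancestor" and b1 == b2}
        if derived <= facts:  # nothing new derivable
            return facts
        facts |= derived

kb = close({("ancestor", "ann", "bob"), ("ancestor", "bob", "cid")})
# kb also contains the derived fact ("ancestor", "ann", "cid").

# Learn one new fact and explicitly re-derive all its consequences:
kb_after = close(kb | {("ancestor", "cid", "dee")})
new_consequences = kb_after - kb
# Three new facts: the one we learned, plus ("ancestor", "bob", "dee")
# and ("ancestor", "ann", "dee"), inferences that didn't exist before.
```

The point is that the consequences of the new fact are explicit and enumerable, which is exactly what's hard to do with weights in a trained network.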
Combine that with the strengths of DL-based approaches (more robustness, the ability to learn complex mappings and non-trivial computations, to exploit indirectly specified structure, and to approximate probability densities if trained correctly) and you get the best of all worlds.
Here's a short presentation on the topic from a recent workshop (and if you can, it's worth checking out the other presentations too): https://www.youtube.com/watch?v=_9dsx4tyzJ8
Not even at the level of language? I don't think it's obvious yet that ML can scale to doing everything natural intelligences do, as well as they do it. It's had its successes, yes, but those are still in limited domains. There's no general-purpose ML AI yet.
The difference between AlphaGo and human Go players is that while AlphaGo is superior at the game, you can change the game in arbitrary ways that the human players can easily learn and adapt their play for, but would require programmers modifying the code for AlphaGo. It can't just learn to perform any arbitrary task.
ML performs extremely well in very well defined settings, but computers have always been better than humans in those kinds of domains. That's why we invented them.
Pearl's career is all about causal networks, so he's slightly biased when it comes to a survey like this.
If by "people" you mean the lay press and people who are not AI scientists, then maybe. But AI scientists usually know their history and can place rule-based systems firmly within AI.
>> I haven't really done any machine learning or logic programming, so could be totally off, but glancing at this was confusing based on the title.
I understand your confusion, but my intuition is that it comes from only being aware of very recent reports on AI, which focus entirely on machine learning.
It might help to clear up the confusion if you pick up an AI textbook, e.g. Russell and Norvig's "AI: A Modern Approach".