Once I asked a Prolog expert at a conference, "Prolog is truly mind-bending and powerful, but why don't we see it used more often? Where are the jobs, so I could get one?" They said that for those who use Prolog in production it is a real competitive advantage that they don't want to announce to the world; plus, some are state actors and quasi-state actors who use it for huge scheduling and optimization problems and like their secrets secret. I think I believe that.
With due respect to Prolog (which is a mind-bending language), the proponents of every niche language say the same thing: "it is so powerful that users don't talk about it, and that is why you don't hear about it."
I've heard many variants of this with respect to Forth, for example.
The reality is that devs can't stop talking about the languages, tools, and frameworks they use.
The simpler explanation is that next to no one actually uses Prolog in production, because it is, well, a niche language (which doesn't take away from its coolness).
Color me skeptical about "secret weapons".
I know many people who do, too, and they don't talk about it unless you move in the Prolog world.
Here's a stock broker using it https://dtai.cs.kuleuven.be/CHR/files/Elston_SecuritEase.pdf
The Java Virtual Machine specification is verified using Prolog: "... implemented the Prolog verifier that formed the basis for the specification in both Java ME and Java SE." https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-0-pr...
Prolog code -> https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.ht...
IBM Watson was written in C++ and Prolog
You can often find a Prolog system behind complex scheduling and resource planning systems for example. It's no longer pure Prolog nowadays, you also have constraint logic programming and other solvers hooked in (see my other comment on ECLiPSe CLP, http://eclipseclp.org/) but Prolog is still often the host language. The ECLiPSe web page has for example a reference on Opel, the car maker, optimizing its supply chain with it. Sicstus (https://sicstus.sics.se/) is also present in this space.
In a very different domain, IBM publicly commented on how they use Prolog in their Watson system, the one that won Jeopardy! (the name was over-used afterward):
I was left wishing that there was a better way to express the logical constraints, which in any case can be converted to integer linear equations, while keeping the linear equations where needed and possible.
SWI-Prolog's clpfd library allows you to mix linear and non-linear integer constraints with "reified" versions that allow you to write implications and such. Silly example:
?- [A, B, C] ins 0..3, A #< B #==> C #= A * B, A #>= B #<==> C #\= B.
Also cited at https://www.metalevel.at/swissues
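For concreteness, the query above can be wrapped in a predicate and grounded with label/1 — a minimal sketch against SWI-Prolog's library(clpfd); the predicate name dice/3 is made up for illustration:

```prolog
:- use_module(library(clpfd)).

% Reified constraints: the implication only forces C #= A * B when
% A #< B actually holds, and the equivalence ties the truth of
% A #>= B to the truth of C #\= B.
dice(A, B, C) :-
    [A, B, C] ins 0..3,
    A #< B #==> C #= A * B,
    A #>= B #<==> C #\= B,
    label([A, B, C]).
```

Calling ?- dice(A, B, C). then enumerates concrete solutions on backtracking instead of leaving residual constraints in the answer.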
I also read it on SO, but can't find it now. I do remember, however, that the SWI library was said to have been ported from the SICStus one, that mistakes were introduced in the process, and that it couldn't be recommended for that reason.
 From http://eclipseclp.org/features.html:
"Solver Interfaces. ECLiPSe interfaces to the COIN-OR, CPLEX, Gurobi and XPRESS-MP linear and mixed-integer programming solvers, and the Gecode finite-domain solver. Other interfaces are under development."
Yet, nothing is really a secret in the world of software development.
Why would Prolog be among the rare few that can keep a secret?
When I encountered Prolog I tried to understand it, and on the assignment (something about stacking dice with the same or different values on their faces, I don't recall exactly) I managed to write Prolog with explicit backtracking. That means I had several "functions" whose fixed parameters encoded the recursion level they were at, and about a hundred lines of code for the assignment.
The TA called us in, trying to understand it; we explained, and he showed us the "real" solution, in a handful of lines of code. Then it clicked; I understood it.
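I don't know what the actual assignment was, but the contrast usually looks like this: instead of threading search state through explicit "backtracking functions", idiomatic Prolog lets the engine do the searching. A generic sketch (perm/2 is just an illustration; SWI already ships permutation/2):

```prolog
% perm(+List, -Perm): Perm is a permutation of List.
% select/3 nondeterministically picks an element; Prolog's
% built-in backtracking explores the alternatives, so no
% explicit search machinery is needed.
perm([], []).
perm(Ls, [X|Xs]) :-
    select(X, Ls, Rest),
    perm(Rest, Xs).
```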
Without that moment I would have found Prolog too alien, and I guess most people do; that's why it isn't more present in the world. It's too different from "actual programming", that is, from imperative programming. Even Haskell and the like are closer to Python and the like than Prolog is. Heck, even Lisp can be closer to them.
It's not just learning a new programming language; it's learning a new way to think, and not everyone can do that.
After that, you move on to writing programs that check and validate facts, and finally to behavior that resembles 'normal' programming. I agree that it's a massively different mindset, and the capacity to grasp Prolog quickly, or to be interested in Prolog programming, should be considered a reflection of the adaptability/curiosity of a person, IMHO.
- the paradigm is high-level, borderline esoteric to the mainstream dev crowd
- it doesn't fit IT culture: it's expressive and mathematical, not tool- nor community-oriented
people in this area enjoy their own quests and don't create huge fads
Prolog is actually much closer to the computer science mindset than the statistical machine learning algorithms that are so popular right now. The kind of mathematics required to understand it are already in the computer science student's toolbox- discrete, combinatorial maths and logic.
I find it strange that there should be so much interest from programmers in a field that is completely alien to their background. It's a bit as if neurologists suddenly started asking "how does one get started on quantum entanglement, without first learning about general relativity and quantum mechanics?".
And I find it strange in particular when compared to the lack of any such interest in logic programming, for which programmers, like I say, already have the necessary background.
So, my conclusion is that the fact that most programmers are not interested in Prolog has nothing to do with how hard or easy the language is to learn, and much more to do with practical considerations, like what happens to be popular in the industry right now.
That said, Prolog is also very non-procedural (unlike ML DSLs, which are layers on top of mainstream OO). It shifts your thinking to the metalevel/denotational view very quickly (since you don't think about one execution path but about nested, nearly pure enumerations). I think that's even more mind-bending than tensors.
As for your comment about size: Prolog was designed in part to parse things, so it's really good at it. Some other programs are trickier in it and require a certain affinity for the Prolog way of thinking.
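As a small illustration of that parsing strength, here is a definite clause grammar (DCG) sketch; digits//1 is a name chosen here for illustration, not a library predicate:

```prolog
% digits//1 parses a non-empty sequence of digit character codes
% into the integer they denote, accumulating left to right.
digits(N) --> digit(D), digits(D, N).

digits(N, N) --> [].
digits(A, N) --> digit(D), { A1 is A * 10 + D }, digits(A1, N).

digit(D) --> [C], { C >= 0'0, C =< 0'9, D is C - 0'0 }.

% ?- phrase(digits(N), `123`).
% N = 123.
```

The grammar reads like a BNF specification, and the same code can also run "backwards" in part — a large reason Prolog stays compact for language-processing tasks.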
If someone made a Prolog with cleaner syntax that completely segregates rules from hints, I would probably use it a lot.
"Compared to these three [Fortran, Lisp, Smalltalk], Prolog has fallen far behind. This was not the case in the early 1980’s, when Prolog had caught up with Lisp in capturing mindshare of what you could call non-IBM computing (to avoid the vexed term “AI”). Hence the title of this article. As culprit (or benefactor, depending on how you look at it) I identify the Japanese “Fifth-Generation Computer System” project, which existed from 1982 to 1992.
Even for those who were aware of the project at the time, it is now worth reviewing its fascinating history in the context of its time. This article is such a review as well as a theory of how Prolog was killed and how Lisp was saved from this fate."
Richard Grigonis commented in 2014 (incorporated in a Postscript to the article):
"The funny thing about this is that, in 1982, as I recall, Fifth-Generation Computer Systems project director, Kazuhiro Fuchi, came to Columbia University in New York, along with Edward Feigenbaum, to give a speech and answer questions of students. Feigenbaum was railing about how the Japanese were going to take over AI and the world, and we should better fund AI researchers in America or we would all be left behind in the dust. It was as if he was using Fuchi as a prop to get more excitement in America for AI. ...
When the question-and-answer period came..., I raised my hand and said, “I hate to be the fly in the ointment here, but this whole thing is based on Prolog? A sort of embodiment of the first-order predicate calculus? Even with trimming the search space in various ways, paramodulation, etc, if you use logic as a knowledge representation scheme it always leads to a combinatorial explosion, doesn’t it? Even with parallel processing, if you encode all of the facts of the real world into this thing, you will end up with ‘all day deductions,’ won’t you?”
Feigenbaum looked around uncomfortably, swatted the air like he was indeed being pestered by a fly, but then, amazingly (and much to his credit) said — and very quickly at that — “Well, yes, he’s right. BUT Americans need more support because the Japanese are advancing the field!” Feigenbaum quickly moved the session forward.
It was the strangest moment. My friend Mike, who had tagged along to watch know-it-all me get verbally obliterated by this erudite group, was stupefied, incredulously uttering, with a tone of disbelief in his voice, “Oh my God Richard, you are right!” ...The top American researchers knew the FGCS was completely flawed, but we were humoring them and making a big deal of it so we could get better funding for other, LISP-based projects in the U.S."
Van Emden concludes:
"A quick way to get an idea of the promise of Prolog is to read “Prolog: The language and its implementation compared with Lisp” by D.H.D. Warren, ACM SIGPLAN Notices, 1977, pages 109-115. Warren shows that in the four years of Prolog implementation development an efficiency in terms of execution speed and memory use was reached that equalled what was reached by a quarter of a century of Lisp implementation development. This is remarkable for a language that in some respects is more high-level than Lisp.
The Japanese were smarter than researchers like Feigenbaum in that they took the trouble to discover that Prolog was a different animal from resolution-based automatic theorem provers, where the search space was pruned by the paramodulation technique you mention and by several others. Prolog is also based on resolution logic, but its inference is restricted to mimicking the function definition and the function call mechanism that has been the mainstay of conventional programming since Fortran. As Lisp also relies on this it is not surprising that since 1977 their performances are similar. In applications where Lisp need not search, Prolog does not search either.
I don’t want to suggest that Feigenbaum should have switched to Prolog, although I may have told him so during the rare and brief meetings we had in the 1980s. My present opinion is that the difference in the strengths of the two languages does not make one of them overall superior to the other. Other things being equal I might now recommend Lisp because its development has steadily continued since the time when interest in Prolog plummeted with the demise of FGCS.
I believe that FGCS was a plausible idea and was similar to the idea behind the Lisp machines of the time. FGCS failed because it failed to come up with a convincing demonstration. Such a demonstration should have come in the form of at least a prototype comparable to a Lisp machine. It could have been clear from the start that a project headed by government bureaucrats and staffed with technical people on loan from big companies had little chance of coming up with a convincing prototype.
A Prolog version of a Lisp machine was at least as promising as the Lisp machine itself. I believe that the failure of the Lisp machines was not predictable. Around 1990 everybody was caught off-guard by the rapid drop in price and rise in performance of commodity PCs. There were a few exceptions: Larry Ellison and Linus Torvalds come to mind.
“The Japanese Did It” is not the correct answer to “Who Killed Prolog?”. Prolog was killed by the failure in the early 1980s of mainstream AI researchers to find out about Prolog, by their willingness to play along with the “appropriate responses” to FGCS, and by their use of FGCS’s failure as an excuse for continued neglect of a promising alternative to Lisp."
There were a few. See for example the Melcom PSI:
Mitsubishi planned in 1986 to sell 500 of them at a price of $199,000 each.
The real question of course is- what doesn't? What is a knowledge representation that has expressive power equivalent to the first-order predicate calculus, but doesn't cause combinatorial explosion? The answer is- no representation that we know of.
Indeed, much of the early AI work with Lisp also used first order logic-based representations and was also prone to combinatorial explosion. This was one important accusation levelled at the field by the author of the Lighthill report, that brought on the first AI winter, at the time of John McCarthy and Donald Michie.
Van Emden points out of course that combinatorial explosion was a concern in those very early days of logic-based AI and logic programming in particular, but not anymore.
This was shortly after the demise of Lisp machines, which our CS department had a fair stack of. The gist of the discussion was that a) language doesn't matter that much, and b) this was yet another example of special-purpose hardware being unable to keep up with general-purpose hardware.
This was also the professor who gave a talk at the computer science department of Texas A&M and came back saying, "that's not a real school." :-)
In the 1980s I spent months writing a prototype planning system in Common Lisp (which is a primary language for me, and I wrote a CL book for Springer-Verlag back then). Even though I was a Prolog novice, I was able to rewrite my entire prototype in Prolog in one week. Admittedly this was because the second implementation of something is always easier, but also because Prolog was such a good fit to this particular problem.
I remember reading The Fifth Generation Computer about Japan’s AI efforts - enjoyed the book but it didn’t really click for me, mostly because I had a Lisp Machine and considered that the `one true path` for AI development at the time.
Would like to have some excuse to actually use it in my daily work.
As a side note, until our compiler design lecturers decided to switch the required implementation language to the newly arrived Java, it was possible to choose any programming language to implement our assignments.
However Lisp and Prolog were off limits as they would make them too easy!
Prolog was a mind-bending language. Some things that would be so simple in another language were so hard, and some things that would be hard were so easy.
I never really got my head around Prolog, just enough to get the assignment finished. But I do remember it as being one of the more interesting assignments I did, interesting enough that I've managed to keep the code for 20+ years.
This question has been asked elsewhere, but is there a way to pass data back and forth with other languages, and let them control the execution flow? A C interface would seem like a good IR to target.
My apologies if I used the wrong terminology, I am not intimate with the more academic nomenclature.
I thought Lisp was commonly used to create domain-specific languages with different syntaxes. Wouldn't this be old hat for someone experienced with Lisp? (Asking because I don't have a lot of experience with Lisp.)
Several implementations support(ed) standard Edinburgh-Prolog syntax. For example the LispWorks implementation of Prolog supports both: http://www.lispworks.com/documentation/lw71/KW-W/html/kwprol...
I still keep a

autocmd Filetype pl set syntax=prolog

around.
One of the more extensive treatments of implementing Prolog-like features can be found in "Paradigms of Artificial Intelligence Programming" (PAIP) by Peter Norvig.
Book and code available here:
A key difference between the languages is in how easily you can reason about programs. In Prolog, if you program in the pure monotonic subset of the language, you get certain algebraic guarantees that are not available in Lisp. For instance, for pure Prolog code and a declarative reading, removing a goal can at most increase the set of solutions, and removing a clause can at most decrease the set of solutions.
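A tiny sketch of what that guarantee buys; the predicate even_between/3 is purely illustrative:

```prolog
:- use_module(library(clpfd)).

% Pure, monotonic code: even_between(Lo, Hi, N) relates N to the
% even integers in [Lo, Hi]. Because the code is pure, removing
% the (N mod 2 #= 0) goal can only ADD solutions (every integer
% in range), and removing a clause can only REMOVE solutions --
% existing answers are never invalidated.
even_between(Lo, Hi, N) :-
    N in Lo..Hi,
    N mod 2 #= 0,
    label([N]).
```

So ?- even_between(1, 6, N). yields N = 2, 4, and 6 on backtracking; drop the parity goal and you get a strict superset of those answers, exactly as the monotonicity property predicts — which is what makes declarative debugging by deleting goals and clauses sound.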
In Prolog, these guarantees can be used to test and debug your programs by logical reasoning. In fact, to a significant extent, you can even automate debugging due to these properties. Such techniques were pioneered by Ehud Shapiro in his 1982 PhD thesis, Algorithmic Program Debugging:
The thesis is an excellent read, and I highly recommend it! Specifically, quoting from page 155:
“Although Lisp is considered a high level language, it is not clear that it encourages precision and clarity of thought. As one Lisp hacker puts it: "Lisp...is like a ball of mud. You can add any amount of mud to it...and it still looks like a ball of mud!". This aspect of Lisp is the hacker's delight, but the theoretician's nightmare. It is known that one may implement without too much effort a reasonable Prolog in Lisp. However, the issue is not implementation — the issue is the method of thought. Based on my experience with both languages I maintain that one's thoughts are better organized, and one's solutions are clearer and more concise, if one thinks in Prolog rather than in Lisp.”
Another key feature of Prolog is that constraints blend in seamlessly, leading to an important declarative programming paradigm called constraint logic programming (CLP). This is the key reason why Prolog is frequently used to solve combinatorial tasks in practice, and often an important motivation for buying a commercial Prolog system.
Lisp is generally not restricted to a particular 'level'. It also comes out of the box with support for different programming paradigms - but no particular language support for logic programming.
> if one thinks in Prolog rather than in Lisp
Lisp supports high-level programming in logic via zillions of extensions. This has been popular throughout Lisp's history. Thus, if we have a problem which is best solved by some kind of logic, it would be great if the Lisp system could be programmed in terms of that logic - which is not unusual.
That said, a programming language built from the ground up on a logic formalism, like Prolog, is a great tool.